Dec 11 16:54:16 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 11 16:54:16 crc kubenswrapper[5129]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.306779 5129 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310209 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310231 5129 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310237 5129 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310243 5129 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310249 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310255 5129 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310261 5129 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310268 5129 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310276 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310282 5129 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310288 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310294 5129 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310298 5129 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310304 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310309 5129 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310313 5129 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310318 5129 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310323 5129 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310328 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310333 5129 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310338 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310343 5129 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310348 5129 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310352 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310357 5129 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310362 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310367 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310373 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310378 5129 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310382 5129 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310388 5129 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310393 5129 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310398 5129 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310402 5129 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310407 5129 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310412 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310417 5129 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310422 5129 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310427 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310432 5129 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310437 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310442 5129 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310447 5129 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310452 5129 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310456 5129 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310461 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310467 5129 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310472 5129 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310476 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310481 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310485 5129 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310491 5129 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310496 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310501 5129 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310527 5129 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310532 5129 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310537 5129 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310542 5129 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310547 5129 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310552 5129 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310557 5129 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310563 5129 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310569 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310575 5129 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310580 5129 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310585 5129 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310592 5129 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310597 5129 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310602 5129 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310607 5129 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310612 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310617 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310623 5129 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310629 5129 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310634 5129 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310638 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310643 5129 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310648 5129 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310653 5129 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310658 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310663 5129 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310667 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310672 5129 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310677 5129 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310681 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.310686 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311227 5129 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311235 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311241 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311246 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311251 5129 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311255 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311260 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311265 5129 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311271 5129 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311275 5129 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311280 5129 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311285 5129 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311289 5129 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311294 5129 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311299 5129 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311304 5129 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311310 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311314 5129 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311320 5129 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311330 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311335 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311340 5129 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311345 5129 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311350 5129 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311354 5129 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311359 5129 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311364 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311368 5129 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311373 5129 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311378 5129 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311382 5129 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311387 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311392 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311397 5129 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311402 5129 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311407 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311411 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311416 5129 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311420 5129 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311425 5129 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311430 5129 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311434 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311439 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311447 5129 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311453 5129 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311458 5129 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311464 5129 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311469 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311475 5129 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311480 5129 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311485 5129 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311500 5129 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311505 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311536 5129 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311541 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311546 5129 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311551 5129 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311555 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311560 5129 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311565 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311570 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311574 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311579 5129 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311584 5129 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311588 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311593 5129 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311600 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311604 5129 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311609 5129 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311614 5129 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311619 5129 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311623 5129 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311628 5129 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311633 5129 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311637 5129 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311642 5129 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311648 5129 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311654 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311659 5129 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311664 5129 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311670 5129 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311675 5129 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311679 5129 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311686 5129 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311693 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.311698 5129 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312171 5129 flags.go:64] FLAG: --address="0.0.0.0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312187 5129 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312198 5129 flags.go:64] FLAG: --anonymous-auth="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312206 5129 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312214 5129 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312220 5129 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312227 5129 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312242 5129 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312248 5129 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312253 5129 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312259 5129 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312265 5129 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312271 5129 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312276 5129 flags.go:64] FLAG: --cgroup-root=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312282 5129 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312287 5129 flags.go:64] FLAG: --client-ca-file=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312292 5129 flags.go:64] FLAG: --cloud-config=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312297 5129 flags.go:64] FLAG: --cloud-provider=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312302 5129 flags.go:64] FLAG: --cluster-dns="[]"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312309 5129 flags.go:64] FLAG: --cluster-domain=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312314 5129 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312321 5129 flags.go:64] FLAG: --config-dir=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312326 5129 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312332 5129 flags.go:64] FLAG: --container-log-max-files="5"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312338 5129 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312344 5129 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312350 5129 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312355 5129 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312361 5129 flags.go:64] FLAG: --contention-profiling="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312368 5129 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312373 5129 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312379 5129 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312385 5129 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312392 5129 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312397 5129 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312403 5129 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312408 5129 flags.go:64] FLAG: --enable-load-reader="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312413 5129 flags.go:64] FLAG: --enable-server="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312419 5129 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312426 5129 flags.go:64] FLAG: --event-burst="100"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312432 5129 flags.go:64] FLAG: --event-qps="50"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312437 5129 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312443 5129 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312448 5129 flags.go:64] FLAG: --eviction-hard=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312455 5129 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312460 5129 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312466 5129 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312480 5129 flags.go:64] FLAG: --eviction-soft=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312486 5129 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312491 5129 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312498 5129 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312503 5129 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312528 5129 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312534 5129 flags.go:64] FLAG: --fail-swap-on="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312539 5129 flags.go:64] FLAG: --feature-gates=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312546 5129 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312552 5129 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312557 5129 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312563 5129 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312569 5129 flags.go:64] FLAG: --healthz-port="10248"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312575 5129 flags.go:64] FLAG: --help="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312580 5129 flags.go:64] FLAG: --hostname-override=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312586 5129 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312591 5129 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312598 5129 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312603 5129 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312609 5129 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312614 5129 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312619 5129 flags.go:64] FLAG: --image-service-endpoint=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312624 5129 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312629 5129 flags.go:64] FLAG: --kube-api-burst="100"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312635 5129 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312640 5129 flags.go:64] FLAG: --kube-api-qps="50"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312646 5129 flags.go:64] FLAG: --kube-reserved=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312651 5129 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312656 5129 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312662 5129 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312667 5129 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312672 5129 flags.go:64] FLAG: --lock-file=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312703 5129 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312710 5129 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312716 5129 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312726 5129 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312732 5129 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312737 5129 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312742 5129 flags.go:64] FLAG: --logging-format="text"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312748 5129 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312755 5129 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312760 5129 flags.go:64] FLAG: --manifest-url=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312765 5129 flags.go:64] FLAG: --manifest-url-header=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312773 5129 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312779 5129 flags.go:64] FLAG: --max-open-files="1000000"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312785 5129 flags.go:64] FLAG: --max-pods="110"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312791 5129 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312797 5129 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312802 5129 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312807 5129 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312813 5129 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312819 5129 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312824 5129 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312837 5129
flags.go:64] FLAG: --node-status-max-images="50" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312842 5129 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312848 5129 flags.go:64] FLAG: --oom-score-adj="-999" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312853 5129 flags.go:64] FLAG: --pod-cidr="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312859 5129 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312868 5129 flags.go:64] FLAG: --pod-manifest-path="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312873 5129 flags.go:64] FLAG: --pod-max-pids="-1" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312878 5129 flags.go:64] FLAG: --pods-per-core="0" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312885 5129 flags.go:64] FLAG: --port="10250" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312890 5129 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312896 5129 flags.go:64] FLAG: --provider-id="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312903 5129 flags.go:64] FLAG: --qos-reserved="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312910 5129 flags.go:64] FLAG: --read-only-port="10255" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312917 5129 flags.go:64] FLAG: --register-node="true" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312924 5129 flags.go:64] FLAG: --register-schedulable="true" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312930 5129 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312942 5129 flags.go:64] FLAG: --registry-burst="10" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.312948 5129 flags.go:64] FLAG: --registry-qps="5" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312955 5129 flags.go:64] FLAG: --reserved-cpus="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312962 5129 flags.go:64] FLAG: --reserved-memory="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312971 5129 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312977 5129 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312984 5129 flags.go:64] FLAG: --rotate-certificates="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312991 5129 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.312998 5129 flags.go:64] FLAG: --runonce="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313004 5129 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313011 5129 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313017 5129 flags.go:64] FLAG: --seccomp-default="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313024 5129 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313030 5129 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313038 5129 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313051 5129 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313058 5129 flags.go:64] FLAG: --storage-driver-password="root" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313064 5129 flags.go:64] FLAG: --storage-driver-secure="false" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.313070 5129 flags.go:64] FLAG: --storage-driver-table="stats" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313077 5129 flags.go:64] FLAG: --storage-driver-user="root" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313083 5129 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313090 5129 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313097 5129 flags.go:64] FLAG: --system-cgroups="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313104 5129 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313115 5129 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313121 5129 flags.go:64] FLAG: --tls-cert-file="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313128 5129 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313169 5129 flags.go:64] FLAG: --tls-min-version="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313177 5129 flags.go:64] FLAG: --tls-private-key-file="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313184 5129 flags.go:64] FLAG: --topology-manager-policy="none" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313192 5129 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313200 5129 flags.go:64] FLAG: --topology-manager-scope="container" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313207 5129 flags.go:64] FLAG: --v="2" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313217 5129 flags.go:64] FLAG: --version="false" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313226 5129 flags.go:64] FLAG: --vmodule="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 
16:54:16.313234 5129 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313242 5129 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313381 5129 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313388 5129 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313394 5129 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313399 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313404 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313410 5129 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313415 5129 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313420 5129 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313425 5129 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313430 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313438 5129 feature_gate.go:328] unrecognized feature gate: Example2 Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313443 5129 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313448 5129 feature_gate.go:328] 
unrecognized feature gate: NoRegistryClusterOperations Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313453 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313458 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313463 5129 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313467 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313473 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313477 5129 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313482 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313486 5129 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313491 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313498 5129 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313502 5129 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313527 5129 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313533 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313537 5129 feature_gate.go:328] unrecognized feature gate: 
NetworkDiagnosticsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313542 5129 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313547 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313552 5129 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313557 5129 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313561 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313566 5129 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313571 5129 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313576 5129 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313580 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313585 5129 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313590 5129 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313595 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313600 5129 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313606 5129 feature_gate.go:351] Setting GA feature 
gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313612 5129 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313621 5129 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313629 5129 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313634 5129 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313639 5129 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313644 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313649 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313654 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313659 5129 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313663 5129 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313668 5129 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313673 5129 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313678 5129 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313685 
5129 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313690 5129 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313695 5129 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313700 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313704 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313709 5129 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313714 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313719 5129 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313724 5129 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313728 5129 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313733 5129 feature_gate.go:328] unrecognized feature gate: Example Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313738 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313743 5129 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313748 5129 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313752 5129 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 
16:54:16.313757 5129 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313762 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313767 5129 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313772 5129 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313776 5129 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313783 5129 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313788 5129 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313794 5129 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313799 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313803 5129 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313808 5129 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313813 5129 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313818 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313823 5129 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313828 5129 feature_gate.go:328] unrecognized feature 
gate: ShortCertRotation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313833 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.313838 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.313850 5129 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.325030 5129 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.325370 5129 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325461 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325471 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325479 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325487 5129 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325496 5129 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325504 5129 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles 
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325533 5129 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325542 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325550 5129 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325558 5129 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325565 5129 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325572 5129 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325579 5129 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325586 5129 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325594 5129 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325601 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325608 5129 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325616 5129 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325623 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325633 5129 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325643 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325651 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325658 5129 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325665 5129 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325672 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325679 5129 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325687 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325694 5129 feature_gate.go:328] unrecognized feature gate: Example Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325701 5129 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325708 5129 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325716 5129 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325727 5129 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325735 5129 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325742 5129 feature_gate.go:328] unrecognized feature gate: Example2 Dec 11 16:54:16 crc 
kubenswrapper[5129]: W1211 16:54:16.325749 5129 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325756 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325763 5129 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325770 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325777 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325785 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325792 5129 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325799 5129 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325806 5129 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325813 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325820 5129 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325827 5129 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325835 5129 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325842 5129 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325849 5129 
feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325859 5129 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325869 5129 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325878 5129 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325886 5129 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325893 5129 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325900 5129 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325907 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325914 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325921 5129 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325928 5129 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325935 5129 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325943 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325950 5129 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 
16:54:16.325959 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325968 5129 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325979 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.325996 5129 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326013 5129 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326024 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326033 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326043 5129 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326052 5129 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326060 5129 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326069 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326078 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326089 5129 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326098 5129 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326108 5129 feature_gate.go:328] unrecognized feature gate: 
InsightsConfigAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326116 5129 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326123 5129 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326130 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326137 5129 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326144 5129 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326151 5129 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326158 5129 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326165 5129 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326172 5129 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.326184 5129 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326383 5129 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 11 16:54:16 crc 
kubenswrapper[5129]: W1211 16:54:16.326395 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326405 5129 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326416 5129 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326425 5129 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326433 5129 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326442 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326450 5129 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326458 5129 feature_gate.go:328] unrecognized feature gate: Example2 Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326466 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326475 5129 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326483 5129 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326490 5129 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326497 5129 feature_gate.go:328] unrecognized feature gate: Example Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326505 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 11 16:54:16 crc 
kubenswrapper[5129]: W1211 16:54:16.326547 5129 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326556 5129 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326563 5129 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326571 5129 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326579 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326586 5129 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326595 5129 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326602 5129 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326609 5129 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326617 5129 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326624 5129 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326631 5129 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326638 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326645 5129 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 11 16:54:16 
crc kubenswrapper[5129]: W1211 16:54:16.326652 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326661 5129 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326668 5129 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326676 5129 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326683 5129 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326690 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326699 5129 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326707 5129 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326715 5129 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326723 5129 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326730 5129 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326737 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326744 5129 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326752 5129 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326760 5129 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326768 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326775 5129 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326782 5129 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326790 5129 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326797 5129 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326804 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326811 5129 
feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326818 5129 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326825 5129 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326832 5129 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326840 5129 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326847 5129 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326854 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326861 5129 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326868 5129 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326875 5129 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326882 5129 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326889 5129 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326896 5129 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326904 5129 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326911 5129 feature_gate.go:328] unrecognized feature gate: 
BootImageSkewEnforcement Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326918 5129 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326925 5129 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326931 5129 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326939 5129 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326946 5129 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326955 5129 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326962 5129 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326969 5129 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326982 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326989 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.326996 5129 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327005 5129 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327012 5129 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327020 5129 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 
16:54:16.327027 5129 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327034 5129 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327041 5129 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327048 5129 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327055 5129 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327063 5129 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.327070 5129 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.327081 5129 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.327279 5129 server.go:962] "Client rotation is on, will bootstrap in background" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.330548 5129 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.334201 5129 bootstrap.go:101] "Use the 
bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.334326 5129 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.334986 5129 server.go:1019] "Starting client certificate rotation" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.335238 5129 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.335755 5129 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.342027 5129 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.346134 5129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.346387 5129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.358752 5129 log.go:25] "Validated CRI v1 runtime API" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.375644 5129 log.go:25] "Validated CRI v1 image API" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.377174 5129 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 
16:54:16.380239 5129 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-11-16-48-08-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.380296 5129 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.408680 5129 manager.go:217] Machine: {Timestamp:2025-12-11 16:54:16.406159879 +0000 UTC m=+0.209689975 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:460ed1db-5810-4839-a957-07b4c992c443 BootID:ff79d577-6c21-4103-ac1a-4d8d177a81d3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 
Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:04:d6 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:04:d6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:39:2f:ca Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e8:57:12 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9b:ec:c0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f4:ee:84 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8a:96:49:8c:cd:db Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:fe:bc:de:72:8b Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 
Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 
Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.409299 5129 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.409690 5129 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.411391 5129 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.411454 5129 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"
GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.411775 5129 topology_manager.go:138] "Creating topology manager with none policy" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.411794 5129 container_manager_linux.go:306] "Creating device plugin manager" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.411835 5129 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.412106 5129 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.412780 5129 state_mem.go:36] "Initialized new in-memory state store" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.413086 5129 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.413861 5129 kubelet.go:491] "Attempting to sync node with API server" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.414061 5129 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.414106 5129 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.414132 5129 kubelet.go:397] "Adding apiserver pod source" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 
16:54:16.414167 5129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.417260 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.417438 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.418338 5129 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.418366 5129 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.420107 5129 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.420158 5129 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.423185 5129 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.423476 5129 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 
16:54:16.423856 5129 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424243 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424267 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424275 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424282 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424289 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424296 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424308 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424315 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424325 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424338 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424351 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424478 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424714 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.424734 5129 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.426102 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.435726 5129 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.435845 5129 server.go:1295] "Started kubelet"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.436109 5129 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.436156 5129 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.436801 5129 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.437347 5129 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 11 16:54:16 crc systemd[1]: Started Kubernetes Kubelet.
Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.438154 5129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.18:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880377b40bb4f40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.435756864 +0000 UTC m=+0.239286891,LastTimestamp:2025-12-11 16:54:16.435756864 +0000 UTC m=+0.239286891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.440330 5129 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.440430 5129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.440711 5129 server.go:317] "Adding debug handlers to kubelet server"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.441312 5129 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.441333 5129 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.441853 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.442036 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="200ms"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.442092 5129 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.442823 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.443524 5129 factory.go:55] Registering systemd factory
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.443634 5129 factory.go:223] Registration of the systemd container factory successfully
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.445396 5129 factory.go:153] Registering CRI-O factory
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.445433 5129 factory.go:223] Registration of the crio container factory successfully
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.445544 5129 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.445571 5129 factory.go:103] Registering Raw factory
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.445586 5129 manager.go:1196] Started watching for new ooms in manager
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.446174 5129 manager.go:319] Starting recovery of all containers
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.483690 5129 manager.go:324] Recovery completed
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.485462 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.485611 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.485686 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.485750 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488220 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488281 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488299 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488335 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488352 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488365 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488377 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488413 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488425 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488438 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488455 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488466 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488497 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488540 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488561 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488576 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488590 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488647 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488661 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488673 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488709 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488721 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488732 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488744 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488781 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488795 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488808 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488820 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488833 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488868 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488880 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488891 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488908 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488920 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488980 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.488992 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489026 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489061 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489072 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489105 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489121 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489132 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489145 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489156 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489189 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489202 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489213 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489224 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489236 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489268 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489282 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489293 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489359 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489375 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489389 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489404 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489448 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489463 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489481 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489496 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489556 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489572 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489587 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489634 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489656 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489672 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489720 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489740 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489758 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489801 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489818 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489835 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489850 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489892 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489913 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489928 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489944 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.489992 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490009 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490025 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490070 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490087 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490105 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490121 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490169 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490187 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490208 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490255 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490273 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490290 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490336 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490358 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490374 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490389 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490437 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490456 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490473 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490537 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490562 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490616 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490637 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490656 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490701 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual
state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490719 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490736 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490755 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490802 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490817 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490875 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490891 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490911 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490925 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490969 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.490985 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491003 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" 
seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491017 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491058 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491070 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491083 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491094 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491106 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.491138 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491149 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491161 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491172 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491184 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491215 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491228 5129 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491239 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491253 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491266 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491297 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491310 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491322 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491333 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491345 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491418 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.491435 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492453 5129 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492482 5129 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492533 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492553 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492595 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492608 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492623 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492635 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492647 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492681 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492692 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492704 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492715 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492727 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492761 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492772 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492785 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492796 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492807 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492843 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" 
seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492855 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492866 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492878 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492888 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492925 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492938 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.492950 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492963 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.492974 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493009 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493020 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493031 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493043 5129 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493053 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493086 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493099 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493110 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493121 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493132 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493166 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493180 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493195 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493212 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493254 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493271 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493283 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493293 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493327 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493341 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493352 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493462 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" 
volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493479 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493651 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493666 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493679 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493716 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493731 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" 
volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493742 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493754 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493766 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493801 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493812 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493835 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" 
seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493846 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493882 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493894 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493905 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493916 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493926 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493957 5129 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493969 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493980 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.493991 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494002 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494016 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494049 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494062 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494072 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494084 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494095 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494128 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494140 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494151 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494216 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494253 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494267 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494301 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494311 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" 
volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494324 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494335 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494346 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494379 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494392 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494403 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494414 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494425 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494454 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494465 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494477 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494491 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494506 5129 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494554 5129 reconstruct.go:97] "Volume reconstruction finished" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.494564 5129 reconciler.go:26] "Reconciler: start to sync state" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.501580 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.503496 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.503790 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.503811 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.505183 5129 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.505210 5129 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.505232 5129 state_mem.go:36] "Initialized new in-memory state store" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.509835 5129 policy_none.go:49] "None policy: Start" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.509867 5129 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.509925 5129 state_mem.go:35] "Initializing new in-memory state store" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.516284 5129 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.519087 5129 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.519130 5129 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.519167 5129 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.519220 5129 kubelet.go:2451] "Starting kubelet main sync loop" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.519342 5129 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.520017 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.541958 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.549217 5129 manager.go:341] "Starting Device Plugin manager" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.549605 5129 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.549638 5129 server.go:85] "Starting device plugin registration server" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.550278 5129 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.550298 5129 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.550559 5129 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.550668 5129 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.550687 5129 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.555567 5129 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.556016 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.619450 5129 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.619702 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.620320 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.620364 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.620381 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.624382 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.624667 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.624733 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625255 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625302 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625325 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625361 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625411 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.625422 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626260 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626332 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626363 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626882 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626962 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.626984 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627109 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627128 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627137 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627757 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627872 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.627906 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628211 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628256 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628272 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628544 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628577 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628589 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.628974 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629073 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629145 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629576 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629606 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629612 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629636 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629649 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.629614 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.630557 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.630593 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.630981 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.631020 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.631038 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.643546 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="400ms" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.648584 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.650429 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.651263 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.651331 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.651344 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc 
kubenswrapper[5129]: I1211 16:54:16.651387 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.652334 5129 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.654943 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.674330 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.692305 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697180 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697387 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697429 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697470 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697496 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697549 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697580 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697601 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697633 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697667 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697699 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697728 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697784 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod 
\"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697819 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697900 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697937 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.697957 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698001 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698005 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698087 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698127 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698168 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698173 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698200 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698235 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698274 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698292 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698368 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698435 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.698625 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.699112 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799570 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799653 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799699 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799736 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799764 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799780 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799874 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799926 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799966 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799968 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.799986 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800024 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800028 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800057 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800081 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800114 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800126 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800143 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800086 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800175 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800187 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800219 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800243 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800263 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800290 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 
16:54:16.800304 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800331 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800345 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800365 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800386 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800398 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.800504 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.852848 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.853899 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.853942 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.853952 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.853976 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: E1211 16:54:16.854619 5129 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.950288 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.956153 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.975336 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: W1211 16:54:16.991405 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-a7b12d4d0695377bbbdf0231296fb84d8297d1d729da9e8a832919cc6800143c WatchSource:0}: Error finding container a7b12d4d0695377bbbdf0231296fb84d8297d1d729da9e8a832919cc6800143c: Status 404 returned error can't find the container with id a7b12d4d0695377bbbdf0231296fb84d8297d1d729da9e8a832919cc6800143c Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.993764 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.997621 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:54:16 crc kubenswrapper[5129]: I1211 16:54:16.999427 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:17 crc kubenswrapper[5129]: W1211 16:54:17.003364 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-5848b12b11f06e338e3af836372bde1db28a99bd9a300d9e692769de4eeed109 WatchSource:0}: Error finding container 5848b12b11f06e338e3af836372bde1db28a99bd9a300d9e692769de4eeed109: Status 404 returned error can't find the container with id 5848b12b11f06e338e3af836372bde1db28a99bd9a300d9e692769de4eeed109 Dec 11 16:54:17 crc kubenswrapper[5129]: W1211 16:54:17.018181 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-9aff5fb95a4377ad074cdefd56d05dbfc3313a67cf0fb3e0c514d138b9018a3a WatchSource:0}: Error finding container 9aff5fb95a4377ad074cdefd56d05dbfc3313a67cf0fb3e0c514d138b9018a3a: Status 404 returned error can't find the container with id 9aff5fb95a4377ad074cdefd56d05dbfc3313a67cf0fb3e0c514d138b9018a3a Dec 11 16:54:17 crc kubenswrapper[5129]: W1211 16:54:17.021406 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-1efc347b820d60d2be6808659b94752aaf02bcec410577a2766553ad1ebaf959 WatchSource:0}: Error finding container 1efc347b820d60d2be6808659b94752aaf02bcec410577a2766553ad1ebaf959: Status 404 returned error can't find the container with id 1efc347b820d60d2be6808659b94752aaf02bcec410577a2766553ad1ebaf959 Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.045209 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" 
interval="800ms" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.255202 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.256696 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.256757 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.256768 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.256793 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.257317 5129 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.427887 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.523747 5129 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33" exitCode=0 Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.523840 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33"} Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 
16:54:17.523911 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9aff5fb95a4377ad074cdefd56d05dbfc3313a67cf0fb3e0c514d138b9018a3a"} Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.524048 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525692 5129 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59" exitCode=0 Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525726 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59"} Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525765 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5848b12b11f06e338e3af836372bde1db28a99bd9a300d9e692769de4eeed109"} Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525824 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525850 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525862 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.525876 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.526115 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.526360 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.526386 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.526397 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.526583 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527116 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527142 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"692187d16838cd6ea3c52cfe7e9da0b69ed0ce4664565cd2a5d2dfae455cd9a5"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527232 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527843 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527869 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.527879 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.528003 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.528792 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.528821 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a7b12d4d0695377bbbdf0231296fb84d8297d1d729da9e8a832919cc6800143c"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.530762 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.530788 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1efc347b820d60d2be6808659b94752aaf02bcec410577a2766553ad1ebaf959"}
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.531127 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.532133 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.532173 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:17 crc kubenswrapper[5129]: I1211 16:54:17.532186 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.532405 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.773357 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.846868 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="1.6s"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.875272 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:54:17 crc kubenswrapper[5129]: E1211 16:54:17.910584 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.000076 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.057598 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.058368 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.058427 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.058437 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.058458 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.058863 5129 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.18:6443: connect: connection refused" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.427184 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.18:6443: connect: connection refused
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.431198 5129 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.433904 5129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.534843 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54" exitCode=0
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.534898 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.535178 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536094 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536130 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536145 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.536403 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536525 5129 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b" exitCode=0
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536581 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.536951 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.537990 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538425 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538449 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538460 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.538644 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538765 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538796 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.538808 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.539972 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.540246 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.541159 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.541197 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.541211 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.541380 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.542376 5129 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de" exitCode=0
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.542434 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.542567 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.543010 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.543035 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.543046 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.543166 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.552406 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.552446 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.552458 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc"}
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.552582 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.553019 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.553041 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:18 crc kubenswrapper[5129]: I1211 16:54:18.553053 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:18 crc kubenswrapper[5129]: E1211 16:54:18.553226 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.565153 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.565248 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.565277 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.565591 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.566570 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.566642 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.566656 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:19 crc kubenswrapper[5129]: E1211 16:54:19.566970 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.583088 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.583141 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.583169 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.583181 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.594401 5129 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7" exitCode=0
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.594485 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7"}
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.594568 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.594724 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.595073 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.595104 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.595115 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:19 crc kubenswrapper[5129]: E1211 16:54:19.595392 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.595944 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.595990 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.596009 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:19 crc kubenswrapper[5129]: E1211 16:54:19.596290 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.659839 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.661035 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.661076 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.661087 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.661112 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:54:19 crc kubenswrapper[5129]: I1211 16:54:19.754580 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.607190 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0ddbf587a32eb3e4021dcf69dd754989d8d18bcb17d92b21567e9baabbf01c01"}
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.607287 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.608009 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.608068 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.608085 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:20 crc kubenswrapper[5129]: E1211 16:54:20.608408 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.610776 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e16a35e61d8e2ff1ef59921f54ada877c2429ae4dd9b1dfda1ef5de602cea580"}
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.610820 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1983a596cbbf41969328c6642b06b8abba3cc5ae8b162c4d87603de486e45587"}
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.610847 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.610849 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5654ca63508057b717f80c16ebe5d6d0766d4282449ac01571c7a04945749180"}
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.610958 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"09ab56e9007d2a650254d1000ce66094953c4e0e92b21cf18755434ff792f630"}
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.611201 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.611238 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:20 crc kubenswrapper[5129]: I1211 16:54:20.611249 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:20 crc kubenswrapper[5129]: E1211 16:54:20.611503 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.619441 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"14aaa84bd234f14470da0a92e12408314e20785eb32082c15df56c66488831bf"}
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.619579 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.619602 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.619578 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.620467 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.620549 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.620570 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:21 crc kubenswrapper[5129]: E1211 16:54:21.621044 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.621688 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.621769 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:21 crc kubenswrapper[5129]: I1211 16:54:21.621867 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:21 crc kubenswrapper[5129]: E1211 16:54:21.622639 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.418873 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.419206 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.421163 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.421237 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.421257 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:22 crc kubenswrapper[5129]: E1211 16:54:22.421934 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.433553 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.488708 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.623229 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.623342 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.623488 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624609 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624684 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624712 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624604 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624813 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624841 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624915 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624956 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.624979 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:22 crc kubenswrapper[5129]: E1211 16:54:22.625686 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: E1211 16:54:22.626013 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: E1211 16:54:22.626417 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.754903 5129 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.755053 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 11 16:54:22 crc kubenswrapper[5129]: I1211 16:54:22.802194 5129 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.625762 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.626810 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.626867 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.626896 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:23 crc kubenswrapper[5129]: E1211 16:54:23.627728 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.874660 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.875068 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.876395 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.876458 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.876479 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:23 crc kubenswrapper[5129]: E1211 16:54:23.877084 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.899393 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.899783 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.901434 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.901506 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:23 crc kubenswrapper[5129]: I1211 16:54:23.901564 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:23 crc kubenswrapper[5129]: E1211 16:54:23.902036 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:24 crc kubenswrapper[5129]: I1211 16:54:24.919440 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:54:24 crc kubenswrapper[5129]: I1211 16:54:24.919952 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:24 crc kubenswrapper[5129]: I1211 16:54:24.921569 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:24 crc kubenswrapper[5129]: I1211 16:54:24.921637 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:24 crc kubenswrapper[5129]: I1211 16:54:24.921657 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:24 crc kubenswrapper[5129]: E1211 16:54:24.922286 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.090044 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.090464 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.091976 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.092041 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.092072 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:25 crc kubenswrapper[5129]: E1211 16:54:25.092930 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.482939 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.483167 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.483976 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.484003 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:25 crc kubenswrapper[5129]: I1211 16:54:25.484018 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:25 crc kubenswrapper[5129]: E1211 16:54:25.484287 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:26 crc kubenswrapper[5129]: E1211 16:54:26.556375 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.101160 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.101385 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.102565 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.102605 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.102616 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:27 crc kubenswrapper[5129]: E1211 16:54:27.102920 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node
\"crc\" not found" node="crc" Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.107809 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.636068 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.636836 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.636898 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:27 crc kubenswrapper[5129]: I1211 16:54:27.636914 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:27 crc kubenswrapper[5129]: E1211 16:54:27.637261 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:29 crc kubenswrapper[5129]: I1211 16:54:29.427747 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 16:54:29 crc kubenswrapper[5129]: I1211 16:54:29.445530 5129 trace.go:236] Trace[1398663936]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:54:19.443) (total time: 10001ms): Dec 11 16:54:29 crc kubenswrapper[5129]: Trace[1398663936]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:54:29.445) Dec 11 16:54:29 crc kubenswrapper[5129]: Trace[1398663936]: [10.00178116s] [10.00178116s] END Dec 11 16:54:29 crc 
kubenswrapper[5129]: E1211 16:54:29.445567 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:54:29 crc kubenswrapper[5129]: E1211 16:54:29.448004 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 11 16:54:29 crc kubenswrapper[5129]: E1211 16:54:29.662414 5129 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.139800 5129 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.139888 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.153864 5129 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.153948 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.160958 5129 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 16:54:30 crc kubenswrapper[5129]: I1211 16:54:30.161019 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 16:54:32 crc kubenswrapper[5129]: E1211 16:54:32.656605 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.756169 5129 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.756305 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.863405 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.864904 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.864995 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.865024 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:32 crc kubenswrapper[5129]: I1211 16:54:32.865076 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:32 crc kubenswrapper[5129]: E1211 16:54:32.880385 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:54:33 crc kubenswrapper[5129]: E1211 16:54:33.803581 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:54:33 crc 
kubenswrapper[5129]: I1211 16:54:33.879702 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.880043 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.880570 5129 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.880635 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.881213 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.881258 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.881278 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:33 crc kubenswrapper[5129]: E1211 16:54:33.881834 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:33 crc kubenswrapper[5129]: I1211 16:54:33.885574 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 
11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.662356 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.663000 5129 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.663094 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.663664 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.663743 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:34 crc kubenswrapper[5129]: I1211 16:54:34.663769 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:34 crc kubenswrapper[5129]: E1211 16:54:34.664606 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.123634 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.123954 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:35 crc 
kubenswrapper[5129]: I1211 16:54:35.125006 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.125069 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.125095 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.125816 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.144349 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.151609 5129 trace.go:236] Trace[2019851542]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:54:20.155) (total time: 14996ms): Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[2019851542]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14996ms (16:54:35.151) Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[2019851542]: [14.996323118s] [14.996323118s] END Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.151675 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.151686 5129 trace.go:236] Trace[38134273]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:54:20.906) (total time: 14244ms): Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[38134273]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14244ms (16:54:35.151) Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[38134273]: [14.244776671s] [14.244776671s] END Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.151727 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.151715 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b40bb4f40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.435756864 +0000 UTC m=+0.239286891,LastTimestamp:2025-12-11 16:54:16.435756864 +0000 UTC m=+0.239286891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.152899 5129 trace.go:236] Trace[1493847556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:54:20.437) (total time: 14714ms): Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[1493847556]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14714ms (16:54:35.152) Dec 11 16:54:35 crc kubenswrapper[5129]: Trace[1493847556]: [14.714907705s] [14.714907705s] END Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.153055 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.152967 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.157432 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.162125 5129 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.163089 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.168616 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b47e4b6e3 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.555910883 +0000 UTC m=+0.359440900,LastTimestamp:2025-12-11 16:54:16.555910883 +0000 UTC m=+0.359440900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.175712 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.620342993 +0000 UTC m=+0.423873010,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.183567 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.620372389 +0000 UTC m=+0.423902406,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.188453 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.620387356 +0000 UTC m=+0.423917373,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.197977 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.625281922 +0000 UTC m=+0.428811959,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.203898 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.625313436 +0000 UTC m=+0.428843473,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.210802 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC 
m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.625333219 +0000 UTC m=+0.428863256,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.217232 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.625398495 +0000 UTC m=+0.428928512,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.222061 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.62541665 +0000 UTC m=+0.428946667,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.226137 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.625427871 +0000 UTC m=+0.428957888,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.230375 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.626900335 +0000 UTC m=+0.430430362,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.235053 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.626973833 +0000 UTC m=+0.430503860,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.240006 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.626992648 +0000 UTC m=+0.430522685,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.246556 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.62712236 +0000 UTC m=+0.430652377,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.251355 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.62713391 +0000 UTC m=+0.430663927,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.256167 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.627141874 +0000 UTC m=+0.430671891,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.262959 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.628232366 +0000 UTC m=+0.431762383,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.270277 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.62826446 +0000 UTC m=+0.431794467,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.274287 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9dadc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9dadc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503818972 +0000 UTC m=+0.307348999,LastTimestamp:2025-12-11 16:54:16.628278888 +0000 UTC m=+0.431808905,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.278547 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c92720\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c92720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.50377296 +0000 UTC m=+0.307302997,LastTimestamp:2025-12-11 16:54:16.628561493 +0000 UTC m=+0.432091510,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.283096 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1880377b44c9979c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1880377b44c9979c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.503801756 +0000 UTC m=+0.307331793,LastTimestamp:2025-12-11 16:54:16.628583484 +0000 UTC m=+0.432113501,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.290399 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b623f0bec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.998038508 +0000 UTC m=+0.801568555,LastTimestamp:2025-12-11 16:54:16.998038508 +0000 UTC m=+0.801568555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.295469 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377b6240fecd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:16.998166221 +0000 UTC m=+0.801696268,LastTimestamp:2025-12-11 16:54:16.998166221 +0000 UTC m=+0.801696268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.305363 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b62cf08ff openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.007474943 +0000 UTC m=+0.811004960,LastTimestamp:2025-12-11 16:54:17.007474943 +0000 UTC m=+0.811004960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.309886 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b63ba0de1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.022877153 +0000 UTC m=+0.826407160,LastTimestamp:2025-12-11 16:54:17.022877153 +0000 UTC m=+0.826407160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.313822 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377b63f0cd8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.026465166 +0000 UTC m=+0.829995203,LastTimestamp:2025-12-11 16:54:17.026465166 +0000 UTC m=+0.829995203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.319054 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377b7e390579 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.467405689 +0000 UTC m=+1.270935706,LastTimestamp:2025-12-11 16:54:17.467405689 +0000 UTC m=+1.270935706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.324250 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377b7e4cee70 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.468710512 +0000 UTC m=+1.272240529,LastTimestamp:2025-12-11 16:54:17.468710512 +0000 UTC m=+1.272240529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.333485 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b7e795ee8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.471622888 +0000 UTC m=+1.275152905,LastTimestamp:2025-12-11 16:54:17.471622888 +0000 UTC m=+1.275152905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.338141 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b7e79f5f4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.471661556 +0000 UTC m=+1.275191573,LastTimestamp:2025-12-11 16:54:17.471661556 +0000 UTC m=+1.275191573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.343366 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b7e7ab0ac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.471709356 +0000 UTC m=+1.275239393,LastTimestamp:2025-12-11 16:54:17.471709356 +0000 UTC m=+1.275239393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.348258 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377b7ee10b29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.478417193 +0000 UTC m=+1.281947210,LastTimestamp:2025-12-11 16:54:17.478417193 +0000 UTC m=+1.281947210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.355998 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377b7ef842f6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.479938806 +0000 UTC m=+1.283468833,LastTimestamp:2025-12-11 16:54:17.479938806 +0000 UTC m=+1.283468833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.361162 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b7f1d49ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.482365356 +0000 UTC m=+1.285895373,LastTimestamp:2025-12-11 16:54:17.482365356 +0000 UTC m=+1.285895373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.363090 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b7f481817 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.485170711 +0000 UTC m=+1.288700738,LastTimestamp:2025-12-11 16:54:17.485170711 +0000 UTC m=+1.288700738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.367027 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b7f565629 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.486104105 +0000 UTC m=+1.289634122,LastTimestamp:2025-12-11 16:54:17.486104105 +0000 UTC m=+1.289634122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.374256 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b7f590b91 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.486281617 +0000 UTC m=+1.289811634,LastTimestamp:2025-12-11 16:54:17.486281617 +0000 UTC m=+1.289811634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.383768 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b81d082a9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.527665321 +0000 UTC m=+1.331195338,LastTimestamp:2025-12-11 16:54:17.527665321 +0000 UTC m=+1.331195338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.391789 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b81d09579 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.527670137 +0000 UTC m=+1.331200154,LastTimestamp:2025-12-11 16:54:17.527670137 +0000 UTC m=+1.331200154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.399706 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b90db6587 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.780036999 +0000 UTC m=+1.583567056,LastTimestamp:2025-12-11 16:54:17.780036999 +0000 UTC m=+1.583567056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.405170 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b917a83dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.790464989 +0000 UTC m=+1.593995026,LastTimestamp:2025-12-11 16:54:17.790464989 +0000 UTC m=+1.593995026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.410726 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b917f12da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.790763738 +0000 UTC m=+1.594293795,LastTimestamp:2025-12-11 16:54:17.790763738 +0000 UTC m=+1.594293795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.415638 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b91924ce2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.792023778 +0000 UTC m=+1.595553805,LastTimestamp:2025-12-11 16:54:17.792023778 +0000 UTC m=+1.595553805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.421055 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377b919fc530 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.792906544 +0000 UTC m=+1.596436571,LastTimestamp:2025-12-11 16:54:17.792906544 +0000 UTC m=+1.596436571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.426444 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880377b92992e33 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.809251891 +0000 UTC m=+1.612781928,LastTimestamp:2025-12-11 16:54:17.809251891 +0000 UTC m=+1.612781928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.432283 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.433171 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377b93a4a9e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:17.826781673 +0000 UTC m=+1.630311710,LastTimestamp:2025-12-11 16:54:17.826781673 +0000 UTC m=+1.630311710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.440051 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377ba8b70a32 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.180307506 +0000 UTC m=+1.983837523,LastTimestamp:2025-12-11 16:54:18.180307506 +0000 UTC m=+1.983837523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.445908 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377ba95f2d35 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.191326517 +0000 UTC m=+1.994856564,LastTimestamp:2025-12-11 16:54:18.191326517 +0000 UTC m=+1.994856564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.450650 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377ba9739852 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.192664658 +0000 UTC m=+1.996194675,LastTimestamp:2025-12-11 16:54:18.192664658 +0000 UTC m=+1.996194675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.458649 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377bb8361a30 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.440292912 +0000 UTC m=+2.243822930,LastTimestamp:2025-12-11 16:54:18.440292912 +0000 UTC 
m=+2.243822930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.464420 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377bb8dd7956 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.451261782 +0000 UTC m=+2.254791799,LastTimestamp:2025-12-11 16:54:18.451261782 +0000 UTC m=+2.254791799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.467184 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bbe0392a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.537644706 +0000 UTC m=+2.341174723,LastTimestamp:2025-12-11 16:54:18.537644706 +0000 UTC m=+2.341174723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.469919 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377bbe27eaac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.54002654 +0000 UTC m=+2.343556567,LastTimestamp:2025-12-11 16:54:18.54002654 +0000 UTC m=+2.343556567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.474790 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bbeb53d68 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.549288296 +0000 UTC m=+2.352818333,LastTimestamp:2025-12-11 16:54:18.549288296 +0000 UTC m=+2.352818333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.481214 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bcbf9b03a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.771877946 +0000 UTC m=+2.575407973,LastTimestamp:2025-12-11 16:54:18.771877946 +0000 UTC m=+2.575407973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.486121 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bcc361cc0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.775837888 +0000 UTC m=+2.579367905,LastTimestamp:2025-12-11 16:54:18.775837888 +0000 UTC m=+2.579367905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.494863 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377bcc9f1a4e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.782718542 +0000 UTC m=+2.586248569,LastTimestamp:2025-12-11 16:54:18.782718542 +0000 UTC m=+2.586248569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.501575 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bccc02078 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.784882808 +0000 UTC m=+2.588412835,LastTimestamp:2025-12-11 16:54:18.784882808 +0000 UTC m=+2.588412835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.508810 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bccd427a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.786195369 +0000 UTC m=+2.589725396,LastTimestamp:2025-12-11 16:54:18.786195369 +0000 UTC m=+2.589725396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc 
kubenswrapper[5129]: E1211 16:54:35.515235 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bccd895b9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.786485689 +0000 UTC m=+2.590015726,LastTimestamp:2025-12-11 16:54:18.786485689 +0000 UTC m=+2.590015726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.521142 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bccea5b50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.787650384 +0000 UTC 
m=+2.591180411,LastTimestamp:2025-12-11 16:54:18.787650384 +0000 UTC m=+2.591180411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.526443 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377bcda3060f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.799752719 +0000 UTC m=+2.603282746,LastTimestamp:2025-12-11 16:54:18.799752719 +0000 UTC m=+2.603282746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.533031 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bd912722b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 
16:54:18.991604267 +0000 UTC m=+2.795134294,LastTimestamp:2025-12-11 16:54:18.991604267 +0000 UTC m=+2.795134294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.539271 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bd92ea27c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:18.993451644 +0000 UTC m=+2.796981672,LastTimestamp:2025-12-11 16:54:18.993451644 +0000 UTC m=+2.796981672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.545325 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bd9976228 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.000316456 +0000 UTC m=+2.803846503,LastTimestamp:2025-12-11 16:54:19.000316456 +0000 UTC m=+2.803846503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.550793 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bd9a81f37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.001413431 +0000 UTC m=+2.804943458,LastTimestamp:2025-12-11 16:54:19.001413431 +0000 UTC m=+2.804943458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.556882 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bd9ab8aa2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.001637538 +0000 UTC m=+2.805167555,LastTimestamp:2025-12-11 16:54:19.001637538 +0000 UTC m=+2.805167555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.561799 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377bd9c4ae06 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.003284998 +0000 UTC m=+2.806815015,LastTimestamp:2025-12-11 16:54:19.003284998 +0000 UTC m=+2.806815015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.567985 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377be6136d33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.209772339 +0000 UTC m=+3.013302356,LastTimestamp:2025-12-11 16:54:19.209772339 +0000 UTC m=+3.013302356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.570165 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377be61a50d9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.210223833 +0000 UTC m=+3.013753850,LastTimestamp:2025-12-11 16:54:19.210223833 +0000 UTC m=+3.013753850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc 
kubenswrapper[5129]: E1211 16:54:35.572347 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377be68e20c2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.217813698 +0000 UTC m=+3.021343715,LastTimestamp:2025-12-11 16:54:19.217813698 +0000 UTC m=+3.021343715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.573827 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377be69a0685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.218593413 +0000 UTC m=+3.022123430,LastTimestamp:2025-12-11 16:54:19.218593413 +0000 UTC m=+3.022123430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.578401 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880377be6b12e20 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.22011088 +0000 UTC m=+3.023640897,LastTimestamp:2025-12-11 16:54:19.22011088 +0000 UTC m=+3.023640897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.579456 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf30c33d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.427402706 +0000 UTC m=+3.230932723,LastTimestamp:2025-12-11 16:54:19.427402706 +0000 UTC m=+3.230932723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.585718 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf3c48985 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.439483269 +0000 UTC m=+3.243013296,LastTimestamp:2025-12-11 16:54:19.439483269 +0000 UTC m=+3.243013296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.591183 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf3d4dd2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.440553263 +0000 UTC m=+3.244083280,LastTimestamp:2025-12-11 16:54:19.440553263 +0000 UTC m=+3.244083280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.597039 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377bfd30775c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.597551452 +0000 UTC m=+3.401081469,LastTimestamp:2025-12-11 16:54:19.597551452 +0000 UTC m=+3.401081469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.606327 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c038cadfd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.704258045 +0000 UTC m=+3.507788062,LastTimestamp:2025-12-11 16:54:19.704258045 +0000 UTC m=+3.507788062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.613284 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c04502fc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.717070792 +0000 UTC m=+3.520600809,LastTimestamp:2025-12-11 16:54:19.717070792 +0000 UTC m=+3.520600809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.618695 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c09d18ac7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.809434311 +0000 UTC m=+3.612964358,LastTimestamp:2025-12-11 16:54:19.809434311 +0000 UTC m=+3.612964358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.636101 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c0a61ca07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.818887687 +0000 UTC m=+3.622417704,LastTimestamp:2025-12-11 16:54:19.818887687 +0000 UTC m=+3.622417704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.641580 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c0a726eab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.819978411 +0000 UTC m=+3.623508458,LastTimestamp:2025-12-11 16:54:19.819978411 +0000 UTC m=+3.623508458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.647178 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c18c26606 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.060100102 +0000 UTC m=+3.863630119,LastTimestamp:2025-12-11 16:54:20.060100102 +0000 UTC m=+3.863630119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.651996 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c19a42232 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.074893874 +0000 UTC m=+3.878423901,LastTimestamp:2025-12-11 16:54:20.074893874 +0000 UTC m=+3.878423901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.656536 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c19b5a351 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.076041041 +0000 UTC m=+3.879571058,LastTimestamp:2025-12-11 16:54:20.076041041 +0000 UTC m=+3.879571058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.663488 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c287f3806 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.32413287 +0000 UTC m=+4.127662887,LastTimestamp:2025-12-11 16:54:20.32413287 +0000 UTC m=+4.127662887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.665172 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.666581 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0ddbf587a32eb3e4021dcf69dd754989d8d18bcb17d92b21567e9baabbf01c01" exitCode=255
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.666660 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0ddbf587a32eb3e4021dcf69dd754989d8d18bcb17d92b21567e9baabbf01c01"}
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.666828 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.666861 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667377 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667409 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667379 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667422 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667437 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.667450 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.667860 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.667817 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c29643899 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.339140761 +0000 UTC m=+4.142670798,LastTimestamp:2025-12-11 16:54:20.339140761 +0000 UTC m=+4.142670798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.668032 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:35 crc kubenswrapper[5129]: I1211 16:54:35.668275 5129 scope.go:117] "RemoveContainer" containerID="0ddbf587a32eb3e4021dcf69dd754989d8d18bcb17d92b21567e9baabbf01c01"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.677127 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c297d433a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.340781882 +0000 UTC m=+4.144311909,LastTimestamp:2025-12-11 16:54:20.340781882 +0000 UTC m=+4.144311909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.681695 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c349e3844 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.52749114 +0000 UTC m=+4.331021157,LastTimestamp:2025-12-11 16:54:20.52749114 +0000 UTC m=+4.331021157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.687433 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c353675fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.537468412 +0000 UTC m=+4.340998429,LastTimestamp:2025-12-11 16:54:20.537468412 +0000 UTC m=+4.340998429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.695504 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c3546b3da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.538532826 +0000 UTC m=+4.342062843,LastTimestamp:2025-12-11 16:54:20.538532826 +0000 UTC m=+4.342062843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.702064 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c41465a04 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.73983642 +0000 UTC m=+4.543366437,LastTimestamp:2025-12-11 16:54:20.73983642 +0000 UTC m=+4.543366437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.707432 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880377c41d38f96 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:20.74909071 +0000 UTC m=+4.552620717,LastTimestamp:2025-12-11 16:54:20.74909071 +0000 UTC m=+4.552620717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.713078 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-controller-manager-crc.1880377cb96358ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Dec 11 16:54:35 crc kubenswrapper[5129]: body:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:22.755002623 +0000 UTC m=+6.558532690,LastTimestamp:2025-12-11 16:54:22.755002623 +0000 UTC m=+6.558532690,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.717599 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377cb9657d2e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:22.755142958 +0000 UTC m=+6.558673005,LastTimestamp:2025-12-11 16:54:22.755142958 +0000 UTC m=+6.558673005,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.725866 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-apiserver-crc.1880377e718f4938 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 11 16:54:35 crc kubenswrapper[5129]: body:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.139857208 +0000 UTC m=+13.943387235,LastTimestamp:2025-12-11 16:54:30.139857208 +0000 UTC m=+13.943387235,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.732421 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377e719049d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.1399229 +0000 UTC m=+13.943452927,LastTimestamp:2025-12-11 16:54:30.1399229 +0000 UTC m=+13.943452927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.739986 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-apiserver-crc.1880377e7265dd55 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 11 16:54:35 crc kubenswrapper[5129]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 11 16:54:35 crc kubenswrapper[5129]:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.153919829 +0000 UTC m=+13.957449876,LastTimestamp:2025-12-11 16:54:30.153919829 +0000 UTC m=+13.957449876,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.745086 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377e7266bf46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.15397767 +0000 UTC m=+13.957507717,LastTimestamp:2025-12-11 16:54:30.15397767 +0000 UTC m=+13.957507717,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.760178 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377e7265dd55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-apiserver-crc.1880377e7265dd55 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 11 16:54:35 crc kubenswrapper[5129]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 11 16:54:35 crc kubenswrapper[5129]:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.153919829 +0000 UTC m=+13.957449876,LastTimestamp:2025-12-11 16:54:30.160997081 +0000 UTC m=+13.964527108,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.767333 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377e7266bf46\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377e7266bf46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:30.15397767 +0000 UTC m=+13.957507717,LastTimestamp:2025-12-11 16:54:30.161037962 +0000 UTC m=+13.964567989,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.778319 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1880377cb96358ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-controller-manager-crc.1880377cb96358ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Dec 11 16:54:35 crc kubenswrapper[5129]: body:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:22.755002623 +0000 UTC m=+6.558532690,LastTimestamp:2025-12-11 16:54:32.756266682 +0000 UTC m=+16.559796739,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.789167 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1880377cb9657d2e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880377cb9657d2e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:22.755142958 +0000 UTC m=+6.558673005,LastTimestamp:2025-12-11 16:54:32.756339934 +0000 UTC m=+16.559869991,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.792814 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-apiserver-crc.1880377f5086be75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 11 16:54:35 crc kubenswrapper[5129]: body:
Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:33.880616565 +0000 UTC m=+17.684146612,LastTimestamp:2025-12-11 16:54:33.880616565 +0000 UTC m=+17.684146612,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 11 16:54:35 crc kubenswrapper[5129]: >
Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.796401 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377f508774ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:33.880663226 +0000 UTC m=+17.684193273,LastTimestamp:2025-12-11 16:54:33.880663226 +0000 UTC m=+17.684193273,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.800957 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377f5086be75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:54:35 crc kubenswrapper[5129]: &Event{ObjectMeta:{kube-apiserver-crc.1880377f5086be75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 11 16:54:35 crc kubenswrapper[5129]: body: Dec 11 16:54:35 crc kubenswrapper[5129]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:33.880616565 +0000 UTC m=+17.684146612,LastTimestamp:2025-12-11 16:54:34.663063109 +0000 UTC m=+18.466593166,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:54:35 crc kubenswrapper[5129]: > Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.806649 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377f508774ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377f508774ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:33.880663226 +0000 UTC m=+17.684193273,LastTimestamp:2025-12-11 16:54:34.663131471 +0000 UTC m=+18.466661538,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.816032 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377bf3d4dd2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf3d4dd2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.440553263 +0000 UTC m=+3.244083280,LastTimestamp:2025-12-11 16:54:35.669605974 +0000 UTC m=+19.473135991,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:35 crc kubenswrapper[5129]: E1211 16:54:35.999752 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377c038cadfd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c038cadfd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.704258045 +0000 UTC m=+3.507788062,LastTimestamp:2025-12-11 16:54:35.996242193 +0000 UTC m=+19.799772210,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:36 crc kubenswrapper[5129]: E1211 16:54:36.008642 5129 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.1880377c04502fc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c04502fc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.717070792 +0000 UTC m=+3.520600809,LastTimestamp:2025-12-11 16:54:36.007453114 +0000 UTC m=+19.810983131,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.433364 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:36 crc kubenswrapper[5129]: E1211 16:54:36.556637 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.670602 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.672398 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc"} Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.672668 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.673338 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.673380 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:36 crc kubenswrapper[5129]: I1211 16:54:36.673393 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:36 crc kubenswrapper[5129]: E1211 16:54:36.673907 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.435307 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.677567 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.678695 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.681020 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc" exitCode=255 Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.681095 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc"} Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.681169 5129 scope.go:117] "RemoveContainer" containerID="0ddbf587a32eb3e4021dcf69dd754989d8d18bcb17d92b21567e9baabbf01c01" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.681491 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.682443 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.682485 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.682528 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:37 crc kubenswrapper[5129]: E1211 16:54:37.683058 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:37 crc kubenswrapper[5129]: I1211 16:54:37.683499 5129 scope.go:117] "RemoveContainer" containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc" Dec 11 16:54:37 crc kubenswrapper[5129]: E1211 16:54:37.683841 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:54:37 crc kubenswrapper[5129]: E1211 16:54:37.692500 5129 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:38 crc kubenswrapper[5129]: I1211 16:54:38.431992 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:38 crc kubenswrapper[5129]: I1211 16:54:38.687995 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 11 16:54:39 crc kubenswrapper[5129]: E1211 16:54:39.064964 5129 controller.go:145] "Failed to ensure lease exists, will 
retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.281466 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.282412 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.282530 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.282617 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.282705 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:39 crc kubenswrapper[5129]: E1211 16:54:39.290837 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.433125 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:39 crc kubenswrapper[5129]: E1211 16:54:39.508989 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.758964 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.759771 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.760539 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.760576 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.760586 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:39 crc kubenswrapper[5129]: E1211 16:54:39.760863 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:39 crc kubenswrapper[5129]: I1211 16:54:39.764101 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:54:39 crc kubenswrapper[5129]: E1211 16:54:39.846256 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.140303 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:54:40 crc kubenswrapper[5129]: 
I1211 16:54:40.140575 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.141445 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.141497 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.141527 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:40 crc kubenswrapper[5129]: E1211 16:54:40.141956 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.142270 5129 scope.go:117] "RemoveContainer" containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc" Dec 11 16:54:40 crc kubenswrapper[5129]: E1211 16:54:40.142490 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:54:40 crc kubenswrapper[5129]: E1211 16:54:40.147435 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188037803336276d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:54:40.142451727 +0000 UTC m=+23.945981764,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.432153 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.694505 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.695220 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.695257 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:40 crc kubenswrapper[5129]: I1211 16:54:40.695279 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:40 crc kubenswrapper[5129]: E1211 16:54:40.695625 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:54:40 crc kubenswrapper[5129]: E1211 16:54:40.949336 5129 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 11 16:54:41 crc kubenswrapper[5129]: E1211 16:54:41.253557 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:54:41 crc kubenswrapper[5129]: I1211 16:54:41.433915 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:42 crc kubenswrapper[5129]: I1211 16:54:42.433661 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:43 crc kubenswrapper[5129]: I1211 16:54:43.432641 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:44 crc kubenswrapper[5129]: I1211 16:54:44.437185 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:45 crc kubenswrapper[5129]: I1211 16:54:45.436911 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.074288 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.291450 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.292789 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.292838 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.292852 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.292882 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.309877 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.434715 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.557057 5129 
eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.673841 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.674204 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.675505 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.675610 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.675636 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.676211 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:46 crc kubenswrapper[5129]: I1211 16:54:46.676631 5129 scope.go:117] "RemoveContainer" containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc"
Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.676945 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:54:46 crc kubenswrapper[5129]: E1211 16:54:46.685631 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188037803336276d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:54:46.676895166 +0000 UTC m=+30.480425223,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:47 crc kubenswrapper[5129]: E1211 16:54:47.195761 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:54:47 crc kubenswrapper[5129]: I1211 16:54:47.435874 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:48 crc kubenswrapper[5129]: I1211 16:54:48.433117 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:48 crc kubenswrapper[5129]: E1211 16:54:48.765577 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:54:49 crc kubenswrapper[5129]: I1211 16:54:49.432659 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:50 crc kubenswrapper[5129]: I1211 16:54:50.435354 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:51 crc kubenswrapper[5129]: E1211 16:54:51.043852 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:54:51 crc kubenswrapper[5129]: I1211 16:54:51.431928 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:52 crc kubenswrapper[5129]: I1211 16:54:52.434209 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:53 crc kubenswrapper[5129]: E1211 16:54:53.084186 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.310335 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.311553 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.311645 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.311674 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.311770 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:54:53 crc kubenswrapper[5129]: E1211 16:54:53.327239 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:54:53 crc kubenswrapper[5129]: I1211 16:54:53.434416 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:54 crc kubenswrapper[5129]: I1211 16:54:54.434062 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:55 crc kubenswrapper[5129]: I1211 16:54:55.435285 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:56 crc kubenswrapper[5129]: I1211 16:54:56.434970 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:56 crc kubenswrapper[5129]: E1211 16:54:56.557739 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:54:57 crc kubenswrapper[5129]: I1211 16:54:57.433688 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.433956 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.520080 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.521589 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.521632 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.521646 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:58 crc kubenswrapper[5129]: E1211 16:54:58.522001 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:54:58 crc kubenswrapper[5129]: I1211 16:54:58.522307 5129 scope.go:117] "RemoveContainer" containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc"
Dec 11 16:54:58 crc kubenswrapper[5129]: E1211 16:54:58.533370 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377bf3d4dd2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf3d4dd2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.440553263 +0000 UTC m=+3.244083280,LastTimestamp:2025-12-11 16:54:58.523998294 +0000 UTC m=+42.327528331,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:58 crc kubenswrapper[5129]: E1211 16:54:58.774539 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377c038cadfd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c038cadfd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.704258045 +0000 UTC m=+3.507788062,LastTimestamp:2025-12-11 16:54:58.766633064 +0000 UTC m=+42.570163101,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:58 crc kubenswrapper[5129]: E1211 16:54:58.787256 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377c04502fc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377c04502fc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.717070792 +0000 UTC m=+3.520600809,LastTimestamp:2025-12-11 16:54:58.782307934 +0000 UTC m=+42.585837961,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:54:58 crc kubenswrapper[5129]: E1211 16:54:58.976094 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.435196 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.750265 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.753216 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d"}
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.753846 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.755709 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.755874 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:54:59 crc kubenswrapper[5129]: I1211 16:54:59.755897 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:54:59 crc kubenswrapper[5129]: E1211 16:54:59.756377 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:00 crc kubenswrapper[5129]: E1211 16:55:00.092618 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.328211 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.329803 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.329869 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.329890 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.329927 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:55:00 crc kubenswrapper[5129]: E1211 16:55:00.346734 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.464560 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.758588 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.759505 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.762354 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d" exitCode=255
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.762467 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d"}
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.762808 5129 scope.go:117] "RemoveContainer" containerID="3e54868b6f4b943ebcb2aea59caa9daf5502f97075d73662c1c830ec2d2f6bcc"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.763318 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.767169 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.767326 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.767452 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:00 crc kubenswrapper[5129]: E1211 16:55:00.767960 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:00 crc kubenswrapper[5129]: I1211 16:55:00.768351 5129 scope.go:117] "RemoveContainer" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d"
Dec 11 16:55:00 crc kubenswrapper[5129]: E1211 16:55:00.768664 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:55:00 crc kubenswrapper[5129]: E1211 16:55:00.777667 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188037803336276d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:55:00.768632105 +0000 UTC m=+44.572162122,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:55:01 crc kubenswrapper[5129]: I1211 16:55:01.434040 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:01 crc kubenswrapper[5129]: I1211 16:55:01.767700 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 11 16:55:02 crc kubenswrapper[5129]: I1211 16:55:02.435294 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.432745 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.907899 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.908195 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.909485 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.909583 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:03 crc kubenswrapper[5129]: I1211 16:55:03.909603 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:03 crc kubenswrapper[5129]: E1211 16:55:03.910122 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:04 crc kubenswrapper[5129]: I1211 16:55:04.434426 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:05 crc kubenswrapper[5129]: I1211 16:55:05.432159 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:05 crc kubenswrapper[5129]: E1211 16:55:05.931877 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:55:06 crc kubenswrapper[5129]: I1211 16:55:06.431789 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:06 crc kubenswrapper[5129]: E1211 16:55:06.559337 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:55:07 crc kubenswrapper[5129]: E1211 16:55:07.101575 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.347312 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.348589 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.348684 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.348716 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.348787 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:55:07 crc kubenswrapper[5129]: E1211 16:55:07.365643 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:55:07 crc kubenswrapper[5129]: I1211 16:55:07.435015 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:08 crc kubenswrapper[5129]: I1211 16:55:08.434738 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:08 crc kubenswrapper[5129]: E1211 16:55:08.434897 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.433834 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.755009 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.755349 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.756450 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.756494 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.756573 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:09 crc kubenswrapper[5129]: E1211 16:55:09.757095 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:09 crc kubenswrapper[5129]: I1211 16:55:09.757394 5129 scope.go:117] "RemoveContainer" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d"
Dec 11 16:55:09 crc kubenswrapper[5129]: E1211 16:55:09.757641 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:55:09 crc kubenswrapper[5129]: E1211 16:55:09.766029 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188037803336276d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:55:09.757602606 +0000 UTC m=+53.561132633,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.139679 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.140222 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.141389 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.141447 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.141467 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:10 crc kubenswrapper[5129]: E1211 16:55:10.142145 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.142586 5129 scope.go:117] "RemoveContainer" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d"
Dec 11 16:55:10 crc kubenswrapper[5129]: E1211 16:55:10.142887 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:55:10 crc kubenswrapper[5129]: E1211 16:55:10.150880 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188037803336276d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188037803336276d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:37.683763053 +0000 UTC m=+21.487293080,LastTimestamp:2025-12-11 16:55:10.142847861 +0000 UTC m=+53.946377908,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:55:10 crc kubenswrapper[5129]: I1211 16:55:10.435431 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:11 crc kubenswrapper[5129]: I1211 16:55:11.435056 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:12 crc kubenswrapper[5129]: I1211 16:55:12.434947 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:13 crc kubenswrapper[5129]: E1211 16:55:13.041499 5129 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:55:13 crc kubenswrapper[5129]: I1211 16:55:13.434762 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:14 crc kubenswrapper[5129]: E1211 16:55:14.110265 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.366783 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.368471 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.368578 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.368600 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.368644 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:55:14 crc kubenswrapper[5129]: E1211 16:55:14.382592 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:55:14 crc kubenswrapper[5129]: I1211 16:55:14.438115 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:15 crc kubenswrapper[5129]: I1211 16:55:15.432098 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:16 crc kubenswrapper[5129]: I1211 16:55:16.434162 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:16 crc kubenswrapper[5129]: E1211 16:55:16.559876 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:55:17 crc kubenswrapper[5129]: I1211 16:55:17.434967 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:18 crc kubenswrapper[5129]: I1211 16:55:18.435712 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:19 crc kubenswrapper[5129]: I1211 16:55:19.432865 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.431991 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.520341 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.521626 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:20 crc
kubenswrapper[5129]: I1211 16:55:20.521677 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.521696 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:20 crc kubenswrapper[5129]: E1211 16:55:20.522080 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.522342 5129 scope.go:117] "RemoveContainer" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d" Dec 11 16:55:20 crc kubenswrapper[5129]: E1211 16:55:20.566432 5129 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1880377bf3d4dd2f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880377bf3d4dd2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:54:19.440553263 +0000 UTC m=+3.244083280,LastTimestamp:2025-12-11 16:55:20.561310154 +0000 UTC m=+64.364840191,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.820521 5129 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.822493 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45"} Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.822678 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.823607 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.823671 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:20 crc kubenswrapper[5129]: I1211 16:55:20.823692 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:20 crc kubenswrapper[5129]: E1211 16:55:20.824270 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:55:21 crc kubenswrapper[5129]: E1211 16:55:21.113768 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.382967 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.384579 5129 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.384669 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.384691 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.384732 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:55:21 crc kubenswrapper[5129]: E1211 16:55:21.395831 5129 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.430828 5129 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.596301 5129 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-2lx9c" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.603553 5129 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-2lx9c" Dec 11 16:55:21 crc kubenswrapper[5129]: I1211 16:55:21.640923 5129 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.335837 5129 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.605598 5129 certificate_manager.go:715] "Certificate rotation deadline determined" 
logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-10 16:50:21 +0000 UTC" deadline="2026-01-04 23:07:07.31453333 +0000 UTC" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.606244 5129 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="582h11m44.708296442s" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.830255 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.831456 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.833799 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" exitCode=255 Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.833888 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45"} Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.833964 5129 scope.go:117] "RemoveContainer" containerID="e4a58b9813f7c0307177adbe269dcee0d005ccfc0c249a0a6d90bc3bbf45f16d" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.834214 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.835196 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.835238 5129 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.835255 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:22 crc kubenswrapper[5129]: E1211 16:55:22.835968 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:55:22 crc kubenswrapper[5129]: I1211 16:55:22.836306 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" Dec 11 16:55:22 crc kubenswrapper[5129]: E1211 16:55:22.836574 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:55:23 crc kubenswrapper[5129]: I1211 16:55:23.837374 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:55:26 crc kubenswrapper[5129]: E1211 16:55:26.560807 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.395992 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.398246 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.398341 5129 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.398362 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.398658 5129 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.411507 5129 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.412000 5129 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.412038 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.416834 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.416913 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.416939 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.416971 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.416995 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:28Z","lastTransitionTime":"2025-12-11T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.439661 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.451820 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.451901 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.451926 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.451960 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.451983 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:28Z","lastTransitionTime":"2025-12-11T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.469506 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.483333 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.483360 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.483369 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.483382 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.483391 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:28Z","lastTransitionTime":"2025-12-11T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.496926 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.503853 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.503921 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.503939 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.503965 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:28 crc kubenswrapper[5129]: I1211 16:55:28.503983 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:28Z","lastTransitionTime":"2025-12-11T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.515087 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.515328 5129 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.515366 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.615860 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.716374 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.817201 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:28 crc kubenswrapper[5129]: E1211 16:55:28.918043 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.019159 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.120553 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.221047 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.322101 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.422777 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.523666 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.624475 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.724845 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.824974 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:29 crc kubenswrapper[5129]: E1211 16:55:29.925290 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.026220 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.126763 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.138948 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.139211 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.139962 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.140008 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.140026 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.140594 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.140935 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.141163 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.226875 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.327495 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.427793 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.528319 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.628629 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.729618 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.823759 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.829982 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.857445 5129 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.858012 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.858040 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.858049 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.858418 5129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:55:30 crc kubenswrapper[5129]: I1211 16:55:30.858660 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.858871 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:55:30 crc kubenswrapper[5129]: E1211 16:55:30.930337 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.030588 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.131642 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.231856 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.332322 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.432927 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.533145 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.633633 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.734508 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.834893 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:31 crc kubenswrapper[5129]: E1211 16:55:31.935907 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.036968 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.137638 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.238479 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.338853 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.439980 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.540221 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.641390 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.742426 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.842501 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:32 crc kubenswrapper[5129]: E1211 16:55:32.943265 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.043716 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.144005 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.244889 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.345432 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.445822 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.546565 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.647450 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.747593 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.848579 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:33 crc kubenswrapper[5129]: E1211 16:55:33.948988 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.050194 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.151346 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.251739 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.352150 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.452291 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.553353 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.653904 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.755019 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.855191 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:34 crc kubenswrapper[5129]: E1211 16:55:34.955323 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.056390 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.157370 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.258349 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.359403 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.460429 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.561500 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.661982 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.762479 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.862836 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:35 crc kubenswrapper[5129]: E1211 16:55:35.963274 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.063892 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.164610 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.265283 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.365871 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.466495 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.561694 5129 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.566757 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.667825 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.768311 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.869411 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:36 crc kubenswrapper[5129]: E1211 16:55:36.970116 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.071413 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.172623 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.273695 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.373899 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.474338 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.574673 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.675313 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.776331 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.877499 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:37 crc kubenswrapper[5129]: E1211 16:55:37.977972 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.078884 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.179406 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.279875 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.380734 5129 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.402122 5129 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.441778 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.462688 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.482811 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.482863 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.482878 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.482895 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.482908 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.536811 5129 apiserver.go:52] "Watching apiserver"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.545636 5129 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.546272 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-m95zr","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc","openshift-machine-config-operator/machine-config-daemon-9gtgq","openshift-multus/network-metrics-daemon-fptr2","openshift-network-diagnostics/network-check-target-fhkjl","openshift-image-registry/node-ca-t8chw","openshift-ovn-kubernetes/ovnkube-node-2khpc","openshift-dns/node-resolver-spxfg","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-additional-cni-plugins-sdzh7","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"]
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.548026 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.549728 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.549849 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.550458 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.551483 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.554169 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.554216 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.554574 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.554740 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.554949 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562032 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562151 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562165 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562452 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562822 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562844 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.562908 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.562997 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.565797 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.566758 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.566805 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.566825 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.566849 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.566867 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.570770 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-spxfg"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.570977 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.574042 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.574509 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.574942 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.575359 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.575935 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.576915 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.577322 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.577554 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.578871 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.578999 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.582950 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.583253 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t8chw"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.586908 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.587210 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.587371 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.587978 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.587856 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.588490 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.591074 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.591587 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.593396 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.593414 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.594042 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.594586 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.595573 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.595754 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.595774 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.595809 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 
16:55:38.595833 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.596584 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.599363 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.604982 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.606509 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.606939 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.606979 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.608105 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.608363 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.612058 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.612215 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.612313 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.612623 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.612834 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.615730 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.616399 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.619161 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.619188 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.624608 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.624674 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.624687 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.624704 5129 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.624717 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.632299 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.636312 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.640788 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.640875 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.640894 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.640916 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.640932 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.643214 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.643672 5129 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.652542 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654813 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654870 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654903 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654929 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654958 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.654991 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655054 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655088 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655119 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655149 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655179 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655210 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655237 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655263 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655302 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655311 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656781 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656827 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656848 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656873 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655642 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656893 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656025 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656613 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655776 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656754 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656871 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.656996 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.655339 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657110 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657290 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657310 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657317 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657367 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657442 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657465 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657488 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657488 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657534 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657566 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657562 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657596 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657684 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657749 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657806 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657859 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.657969 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658084 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658205 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658374 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658586 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658709 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658824 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.658988 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659106 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659334 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659507 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659679 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659733 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659788 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659853 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659908 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.659963 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660017 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660112 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660170 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660227 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660293 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660345 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660397 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660452 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660547 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660610 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660669 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660724 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660778 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660837 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660896 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.660954 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661013 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661072 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661128 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661181 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661234 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661291 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661344 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661401 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661453 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661506 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661602 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661661 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") 
pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661788 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.661969 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.662103 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.662294 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.662496 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.662722 5129 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.662893 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663014 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663089 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663147 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663200 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: 
\"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663267 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663324 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663377 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663440 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663493 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663592 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663664 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663724 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663780 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663841 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663898 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 
11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.663971 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664027 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664093 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664146 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664198 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664252 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664309 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664366 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664422 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664477 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664571 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664637 5129 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664694 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664762 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664820 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664878 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664936 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: 
\"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.664997 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665059 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665120 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665195 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665258 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665316 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665383 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665446 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665505 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665603 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665676 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:55:38 crc 
kubenswrapper[5129]: I1211 16:55:38.665739 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665801 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665863 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.665929 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666005 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666068 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: 
\"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666138 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666200 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666259 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666320 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666387 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666478 5129 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666577 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666641 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666708 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666776 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666837 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666897 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.666960 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667031 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667096 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667167 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667230 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667298 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667364 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667428 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667491 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.667593 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 11 16:55:38 crc kubenswrapper[5129]: 
I1211 16:55:38.667655 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668588 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668647 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668453 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668712 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668751 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668785 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668793 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668818 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.668907 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669134 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669202 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669243 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669578 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669467 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669616 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.669808 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670016 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). 
InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670131 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670174 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670179 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670214 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670242 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670241 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670461 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670469 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670496 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.670549 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671062 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671121 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671588 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671640 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671661 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671707 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671754 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671796 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671846 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671887 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671927 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.671969 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672055 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672100 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672142 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672183 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672204 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672223 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672272 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672311 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672353 
5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672396 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672433 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672470 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672536 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672577 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672618 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672654 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672696 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672734 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672774 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 
16:55:38.672813 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672856 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672895 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672900 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.672941 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673003 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673055 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673091 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673121 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673160 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673189 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673216 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673254 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673290 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673282 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.673387 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.173364137 +0000 UTC m=+82.976894164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673613 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.673662 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674078 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674118 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674147 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674184 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674216 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674219 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674246 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674305 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674331 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674338 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674355 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674391 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674479 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674560 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.674767 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675071 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675257 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675340 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675354 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675543 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675693 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675721 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675903 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675936 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675999 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.675670 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676111 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676187 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676241 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676330 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676338 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676354 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676360 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676370 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676424 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676450 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676472 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" 
(UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676710 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676737 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-bin\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676758 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdznl\" (UniqueName: \"kubernetes.io/projected/5313889a-2681-4f68-96f8-d5dfea8d3a8b-kube-api-access-vdznl\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676727 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676779 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-cnibin\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676922 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.676972 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677017 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677061 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" 
(UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677092 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677375 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677503 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.677871 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678270 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678389 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678454 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678468 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678507 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jpwl\" (UniqueName: \"kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678696 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678734 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678760 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cni-binary-copy\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678788 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-k8s-cni-cncf-io\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678815 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-proxy-tls\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678840 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-system-cni-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678880 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678909 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwmv\" (UniqueName: \"kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: 
I1211 16:55:38.678925 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678942 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.678987 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679032 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679058 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679107 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679171 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-conf-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679221 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a0974084-197d-495d-b227-4ea7d61426c6-tmp-dir\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679268 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib\") pod \"ovnkube-node-2khpc\" (UID: 
\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679331 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679377 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-socket-dir-parent\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679429 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-kubelet\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679474 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-hostroot\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679577 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " 
pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679591 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nql\" (UniqueName: \"kubernetes.io/projected/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-kube-api-access-t4nql\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679659 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679716 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679723 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679812 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679866 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679826 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss94k\" (UniqueName: \"kubernetes.io/projected/15d52990-0733-45fe-ac96-429a9503dbab-kube-api-access-ss94k\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679924 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-system-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679955 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.679979 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680007 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680076 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-etc-kubernetes\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680088 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680073 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680102 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680246 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680774 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680838 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680855 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.680851 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680898 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680964 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.681122 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.181100467 +0000 UTC m=+82.984630494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.681287 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.681312 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.681538 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.681809 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.681873 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.682247 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.682378 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.683347 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.682726 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.683927 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684071 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684251 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684565 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684695 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685334 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684684 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684786 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684784 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684872 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680839 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684975 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685398 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685433 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.684910 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4lj\" (UniqueName: \"kubernetes.io/projected/0e5c4751-c0b7-476b-a553-042ed9d66177-kube-api-access-5r4lj\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685026 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685077 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685098 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685139 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685262 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685295 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685302 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.680888 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685562 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45767eb3-dd9a-4116-a1d6-a0e107c053ac-host\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685674 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685767 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685849 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-multus\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685918 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.685930 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a0974084-197d-495d-b227-4ea7d61426c6-hosts-file\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.686320 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.686464 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.686505 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.686978 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687159 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.686655 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687231 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687259 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687281 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687315 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687347 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-multus-certs\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687369 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687535 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687590 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687787 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687865 5129 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.687933 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688256 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688315 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688378 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688409 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688439 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688465 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lw8m\" (UniqueName: \"kubernetes.io/projected/a0974084-197d-495d-b227-4ea7d61426c6-kube-api-access-2lw8m\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688491 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-rootfs\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688544 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-mcd-auth-proxy-config\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688570 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45767eb3-dd9a-4116-a1d6-a0e107c053ac-serviceca\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688594 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688631 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688655 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688677 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688683 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688700 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688723 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688749 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cnibin\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688772 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-netns\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688793 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-daemon-config\") pod \"multus-m95zr\" (UID: 
\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688815 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-os-release\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688839 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688860 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688881 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688902 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-os-release\") pod \"multus-m95zr\" (UID: 
\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688924 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrc4\" (UniqueName: \"kubernetes.io/projected/45767eb3-dd9a-4116-a1d6-a0e107c053ac-kube-api-access-6xrc4\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688945 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.688999 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689058 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689075 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689088 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689101 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689105 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689548 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689114 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689607 5129 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689625 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689673 5129 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689604 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689693 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689708 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689725 5129 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689740 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689754 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689768 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689782 5129 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: 
I1211 16:55:38.689796 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689810 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689824 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689839 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689853 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689867 5129 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689880 5129 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689877 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689893 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689971 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689977 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.689999 5129 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690018 5129 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690036 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: 
\"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690052 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690067 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690082 5129 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690099 5129 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690113 5129 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690126 5129 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690140 5129 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc 
kubenswrapper[5129]: I1211 16:55:38.690154 5129 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690167 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690180 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690194 5129 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690207 5129 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690218 5129 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690230 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: 
I1211 16:55:38.690242 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690254 5129 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690266 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690279 5129 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690292 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690303 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690317 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690330 5129 reconciler_common.go:299] 
"Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690342 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690355 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690368 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690380 5129 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690392 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690442 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690456 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node 
\"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690469 5129 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690489 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690504 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690531 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690546 5129 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690560 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690573 5129 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: 
I1211 16:55:38.690589 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690602 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690615 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690628 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690642 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690654 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690667 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690680 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690693 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690706 5129 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690720 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690733 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690746 5129 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690759 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690771 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690783 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690799 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690812 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690825 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690840 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690851 5129 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690864 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690876 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690889 5129 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690902 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690915 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690927 5129 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690940 5129 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690951 5129 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690965 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690976 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690988 5129 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691000 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691014 5129 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691029 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691044 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691056 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691069 5129 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691080 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691093 5129 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691105 5129 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691120 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691133 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691145 5129 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691157 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691169 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691180 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691192 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691207 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691220 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690568 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.690777 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691243 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.691669 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691711 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.691740 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.191719467 +0000 UTC m=+82.995249484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.691819 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692104 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692440 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692492 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692728 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692789 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.692997 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.693530 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.693712 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.693826 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.693839 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694064 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694214 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694255 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694530 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694743 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.694860 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695004 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695241 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695259 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695736 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695920 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.695978 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.696164 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.699939 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.687103 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.700745 5129 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.701072 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.701740 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.701892 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.702032 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.702239 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.702881 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.704728 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.705158 5129 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.705286 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.705367 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.705737 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.706005 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.205978239 +0000 UTC m=+83.009508266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.707474 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.709469 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.709836 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.709923 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710216 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710721 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710775 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710668 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710868 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.710993 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.711490 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.711720 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.712224 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.712327 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.712406 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.712548 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 
nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.212528421 +0000 UTC m=+83.016058438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.712643 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.712406 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.712748 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.712846 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.713154 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.713399 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.713992 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.715103 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.715249 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.715329 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.715666 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716414 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716600 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716660 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716833 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716944 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.716995 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717048 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717246 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717399 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717437 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717451 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717467 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717480 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717793 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.717848 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718226 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718356 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718377 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718474 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718570 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718908 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.718936 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719157 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719307 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719349 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719412 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719414 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719588 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719597 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721077 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721122 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721148 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.719624 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721678 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721624 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721697 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721744 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721856 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.721874 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722123 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722151 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722279 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722376 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722537 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722591 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722665 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722744 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.722762 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.723409 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.723641 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.723732 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.731309 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.737770 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.739971 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.748125 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.748358 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.754830 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.756194 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.761311 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.763545 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.763755 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.764911 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.772170 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.777545 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.791975 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792016 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792056 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-os-release\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792088 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792141 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792200 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrc4\" (UniqueName: \"kubernetes.io/projected/45767eb3-dd9a-4116-a1d6-a0e107c053ac-kube-api-access-6xrc4\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792223 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792242 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-bin\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792257 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdznl\" (UniqueName: \"kubernetes.io/projected/5313889a-2681-4f68-96f8-d5dfea8d3a8b-kube-api-access-vdznl\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792273 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-cnibin\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 
16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792301 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792328 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792348 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792372 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792393 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792414 5129 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792435 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jpwl\" (UniqueName: \"kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792486 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-os-release\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792503 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792560 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cni-binary-copy\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792575 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792584 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-k8s-cni-cncf-io\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792589 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-cnibin\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792612 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-proxy-tls\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792625 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792633 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-system-cni-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792654 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792664 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792678 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pnwmv\" (UniqueName: \"kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792693 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-k8s-cni-cncf-io\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792702 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792724 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-conf-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792731 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792743 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a0974084-197d-495d-b227-4ea7d61426c6-tmp-dir\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792765 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792784 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792804 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-socket-dir-parent\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792822 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-kubelet\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792840 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-hostroot\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792861 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4nql\" (UniqueName: \"kubernetes.io/projected/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-kube-api-access-t4nql\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792890 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod 
\"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792907 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ss94k\" (UniqueName: \"kubernetes.io/projected/15d52990-0733-45fe-ac96-429a9503dbab-kube-api-access-ss94k\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792923 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-system-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792950 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792965 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-etc-kubernetes\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.792982 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: 
\"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793003 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4lj\" (UniqueName: \"kubernetes.io/projected/0e5c4751-c0b7-476b-a553-042ed9d66177-kube-api-access-5r4lj\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793021 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45767eb3-dd9a-4116-a1d6-a0e107c053ac-host\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793037 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793056 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-multus\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793075 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a0974084-197d-495d-b227-4ea7d61426c6-hosts-file\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " 
pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793094 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793115 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793132 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793149 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793168 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-multus-certs\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc 
kubenswrapper[5129]: I1211 16:55:38.793186 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793216 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793234 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lw8m\" (UniqueName: \"kubernetes.io/projected/a0974084-197d-495d-b227-4ea7d61426c6-kube-api-access-2lw8m\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793254 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-rootfs\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793275 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-mcd-auth-proxy-config\") pod \"machine-config-daemon-9gtgq\" (UID: 
\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793293 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45767eb3-dd9a-4116-a1d6-a0e107c053ac-serviceca\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793311 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793339 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793348 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793348 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cni-binary-copy\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc 
kubenswrapper[5129]: I1211 16:55:38.792559 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793356 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793418 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.793432 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793442 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793469 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cnibin\") pod 
\"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.793478 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. No retries permitted until 2025-12-11 16:55:39.29346517 +0000 UTC m=+83.096995187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793385 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793496 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-netns\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793527 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-daemon-config\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793392 5129 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-bin\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793876 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.793797 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2khpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794080 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-multus-certs\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794164 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794225 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-system-cni-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794471 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: 
\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794470 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-etc-kubernetes\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794623 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-cni-multus\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794659 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794647 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794725 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794858 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794851 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a0974084-197d-495d-b227-4ea7d61426c6-hosts-file\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794947 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-binary-copy\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.794982 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-conf-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795011 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795023 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-cnibin\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795260 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-system-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795291 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795341 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-socket-dir-parent\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795389 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-var-lib-kubelet\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795414 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795446 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-cni-dir\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795441 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.795827 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-rootfs\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796036 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45767eb3-dd9a-4116-a1d6-a0e107c053ac-host\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796250 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/0e5c4751-c0b7-476b-a553-042ed9d66177-whereabouts-flatfile-configmap\") 
pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796322 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796369 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796403 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-host-run-netns\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796438 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-os-release\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796466 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796577 5129 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796593 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796605 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796617 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796628 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796640 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796657 5129 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796670 5129 reconciler_common.go:299] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796685 5129 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796696 5129 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796709 5129 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796722 5129 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796733 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796745 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796757 5129 reconciler_common.go:299] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796768 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796779 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796791 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796802 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796813 5129 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796824 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796834 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796847 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796858 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796872 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796883 5129 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796883 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5313889a-2681-4f68-96f8-d5dfea8d3a8b-hostroot\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796894 5129 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796813 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45767eb3-dd9a-4116-a1d6-a0e107c053ac-serviceca\") 
pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796905 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e5c4751-c0b7-476b-a553-042ed9d66177-os-release\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796929 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796942 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796948 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796970 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796983 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.796996 5129 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797007 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797019 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797030 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797041 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797051 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797063 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797074 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797085 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797096 5129 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797108 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797119 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797129 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797139 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797151 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797164 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797175 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797188 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797200 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797212 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797224 5129 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797237 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797251 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797262 5129 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797273 5129 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797284 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797295 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797308 5129 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797320 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797331 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797341 5129 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797351 5129 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797362 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797373 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797385 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797395 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797406 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797417 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797428 5129 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797438 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797448 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797458 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797469 5129 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797482 5129 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797493 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797504 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797533 5129 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797544 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797555 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797568 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797578 5129 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797588 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797599 5129 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797609 5129 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797619 5129 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797631 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797643 5129 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797656 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797669 5129 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797682 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797694 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797149 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797707 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797722 5129 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797734 5129 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797748 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797761 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797776 5129 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797790 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797805 5129 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797821 5129 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797837 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797850 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797860 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797873 5129 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797885 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797901 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797918 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.797939 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.798435 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a0974084-197d-495d-b227-4ea7d61426c6-tmp-dir\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.798473 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-mcd-auth-proxy-config\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.807380 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-m95zr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdznl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m95zr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.808962 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-proxy-tls\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.809285 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.810213 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4nql\" (UniqueName: \"kubernetes.io/projected/b9f3b447-4c51-44f3-9ade-21b54c3a6daf-kube-api-access-t4nql\") pod \"machine-config-daemon-9gtgq\" (UID: \"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\") " pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.810297 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5313889a-2681-4f68-96f8-d5dfea8d3a8b-multus-daemon-config\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.813109 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.813421 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jpwl\" (UniqueName: \"kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl\") pod \"ovnkube-node-2khpc\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.814115 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdznl\" (UniqueName: \"kubernetes.io/projected/5313889a-2681-4f68-96f8-d5dfea8d3a8b-kube-api-access-vdznl\") pod \"multus-m95zr\" (UID: \"5313889a-2681-4f68-96f8-d5dfea8d3a8b\") " pod="openshift-multus/multus-m95zr"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.815148 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lw8m\" (UniqueName: \"kubernetes.io/projected/a0974084-197d-495d-b227-4ea7d61426c6-kube-api-access-2lw8m\") pod \"node-resolver-spxfg\" (UID: \"a0974084-197d-495d-b227-4ea7d61426c6\") " pod="openshift-dns/node-resolver-spxfg"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.815330 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss94k\" (UniqueName: \"kubernetes.io/projected/15d52990-0733-45fe-ac96-429a9503dbab-kube-api-access-ss94k\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.815999 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrc4\" (UniqueName: \"kubernetes.io/projected/45767eb3-dd9a-4116-a1d6-a0e107c053ac-kube-api-access-6xrc4\") pod \"node-ca-t8chw\" (UID: \"45767eb3-dd9a-4116-a1d6-a0e107c053ac\") " pod="openshift-image-registry/node-ca-t8chw"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.818756 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4lj\" (UniqueName: \"kubernetes.io/projected/0e5c4751-c0b7-476b-a553-042ed9d66177-kube-api-access-5r4lj\") pod \"multus-additional-cni-plugins-sdzh7\" (UID: \"0e5c4751-c0b7-476b-a553-042ed9d66177\") " pod="openshift-multus/multus-additional-cni-plugins-sdzh7"
Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819019 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5c4751-c0b7-476b-a553-042ed9d66177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819902 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819943 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819956 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819973 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.819982 5129 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.825127 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnwmv\" (UniqueName: \"kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv\") pod \"ovnkube-control-plane-57b78d8988-h4rqc\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.828333 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-h4rqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.837795 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"634ba037-86a0-4350-86e6-ff15f9395f74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793
dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.844798 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf7945d-7e6c-4b24-854b-268b781347c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740
a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"s
upplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.853706 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.862611 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.864289 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.864926 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.865090 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.869592 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45767eb3-dd9a-4116-a1d6-a0e107c053ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xrc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.879717 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb249f8f-9a28-4c68-91ed-0a729945afdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.885119 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.889586 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.898744 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.903459 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:38 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: source /etc/kubernetes/apiserver-url.env Dec 11 16:55:38 crc kubenswrapper[5129]: else Dec 11 16:55:38 crc kubenswrapper[5129]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 11 16:55:38 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 11 16:55:38 crc kubenswrapper[5129]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.904627 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.907774 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.919055 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.919496 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-ab94024b72fbe859ce9bc224e53798c718c2bb2b6d395dc9ada679002987d775 WatchSource:0}: Error finding container ab94024b72fbe859ce9bc224e53798c718c2bb2b6d395dc9ada679002987d775: Status 404 returned error can't find the container with id ab94024b72fbe859ce9bc224e53798c718c2bb2b6d395dc9ada679002987d775 Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.922083 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.922259 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.922354 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.922483 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.922599 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:38Z","lastTransitionTime":"2025-12-11T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.924151 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:38 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:38 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 11 16:55:38 crc kubenswrapper[5129]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 11 16:55:38 crc kubenswrapper[5129]: ho_enable="--enable-hybrid-overlay" Dec 11 16:55:38 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 11 16:55:38 crc kubenswrapper[5129]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 11 16:55:38 crc kubenswrapper[5129]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 11 16:55:38 crc kubenswrapper[5129]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 11 16:55:38 crc kubenswrapper[5129]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 11 16:55:38 crc kubenswrapper[5129]: --webhook-host=127.0.0.1 \ Dec 11 16:55:38 crc kubenswrapper[5129]: --webhook-port=9743 \ Dec 11 16:55:38 crc kubenswrapper[5129]: ${ho_enable} \ Dec 11 16:55:38 crc kubenswrapper[5129]: --enable-interconnect \ Dec 11 16:55:38 crc kubenswrapper[5129]: --disable-approver \ Dec 11 16:55:38 crc kubenswrapper[5129]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 11 16:55:38 crc kubenswrapper[5129]: --wait-for-kubernetes-api=200s \ Dec 11 16:55:38 crc kubenswrapper[5129]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 11 16:55:38 crc kubenswrapper[5129]: --loglevel="${LOGLEVEL}" Dec 11 16:55:38 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.927215 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:38 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:38 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 11 16:55:38 crc kubenswrapper[5129]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 11 16:55:38 crc kubenswrapper[5129]: --disable-webhook \ Dec 11 16:55:38 crc kubenswrapper[5129]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 11 16:55:38 crc kubenswrapper[5129]: --loglevel="${LOGLEVEL}" Dec 11 16:55:38 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.929341 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.930315 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-9359c9fbdc006142894fe3656a576c65244eb8dfb5332b272290ab8e5ff3e717 WatchSource:0}: Error finding container 9359c9fbdc006142894fe3656a576c65244eb8dfb5332b272290ab8e5ff3e717: Status 404 returned error can't find the container with id 9359c9fbdc006142894fe3656a576c65244eb8dfb5332b272290ab8e5ff3e717 Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.933636 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.934805 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.935730 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-spxfg" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.945763 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.946257 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0974084_197d_495d_b227_4ea7d61426c6.slice/crio-8eab0a96b438c3ce83c1d761c8dd609f186ebfab1dd2926b210267e6390119f5 WatchSource:0}: Error finding container 8eab0a96b438c3ce83c1d761c8dd609f186ebfab1dd2926b210267e6390119f5: Status 404 returned error can't find the container with id 8eab0a96b438c3ce83c1d761c8dd609f186ebfab1dd2926b210267e6390119f5 Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.950660 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:38 crc kubenswrapper[5129]: set -uo pipefail Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 11 16:55:38 crc kubenswrapper[5129]: HOSTS_FILE="/etc/hosts" Dec 11 16:55:38 crc kubenswrapper[5129]: TEMP_FILE="/tmp/hosts.tmp" Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: # Make a temporary file with the old hosts file's attributes. Dec 11 16:55:38 crc kubenswrapper[5129]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 11 16:55:38 crc kubenswrapper[5129]: echo "Failed to preserve hosts file. Exiting." Dec 11 16:55:38 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: while true; do Dec 11 16:55:38 crc kubenswrapper[5129]: declare -A svc_ips Dec 11 16:55:38 crc kubenswrapper[5129]: for svc in "${services[@]}"; do Dec 11 16:55:38 crc kubenswrapper[5129]: # Fetch service IP from cluster dns if present. We make several tries Dec 11 16:55:38 crc kubenswrapper[5129]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 11 16:55:38 crc kubenswrapper[5129]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 11 16:55:38 crc kubenswrapper[5129]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 11 16:55:38 crc kubenswrapper[5129]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:38 crc kubenswrapper[5129]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:38 crc kubenswrapper[5129]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:38 crc kubenswrapper[5129]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 11 16:55:38 crc kubenswrapper[5129]: for i in ${!cmds[*]} Dec 11 16:55:38 crc kubenswrapper[5129]: do Dec 11 16:55:38 crc kubenswrapper[5129]: ips=($(eval "${cmds[i]}")) Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: svc_ips["${svc}"]="${ips[@]}" Dec 11 16:55:38 crc kubenswrapper[5129]: break Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: # Update /etc/hosts only if we get valid service IPs Dec 11 16:55:38 crc kubenswrapper[5129]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 11 16:55:38 crc kubenswrapper[5129]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 11 16:55:38 crc kubenswrapper[5129]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 11 16:55:38 crc kubenswrapper[5129]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 11 16:55:38 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:38 crc kubenswrapper[5129]: continue Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: # Append resolver entries for services Dec 11 16:55:38 crc kubenswrapper[5129]: rc=0 Dec 11 16:55:38 crc kubenswrapper[5129]: for svc in "${!svc_ips[@]}"; do Dec 11 16:55:38 crc kubenswrapper[5129]: for ip in ${svc_ips[${svc}]}; do Dec 11 16:55:38 crc kubenswrapper[5129]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: if [[ $rc -ne 0 ]]; then Dec 11 16:55:38 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:38 crc kubenswrapper[5129]: continue Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: Dec 11 16:55:38 crc kubenswrapper[5129]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 11 16:55:38 crc kubenswrapper[5129]: # Replace /etc/hosts with our modified version if needed Dec 11 16:55:38 crc kubenswrapper[5129]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 11 16:55:38 crc kubenswrapper[5129]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:38 crc kubenswrapper[5129]: unset svc_ips Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lw8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-spxfg_openshift-dns(a0974084-197d-495d-b227-4ea7d61426c6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.952037 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-spxfg" podUID="a0974084-197d-495d-b227-4ea7d61426c6" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.952375 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-t8chw" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.962956 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4nql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9gtgq_openshift-machine-config-operator(b9f3b447-4c51-44f3-9ade-21b54c3a6daf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.965994 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.967720 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4nql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9gtgq_openshift-machine-config-operator(b9f3b447-4c51-44f3-9ade-21b54c3a6daf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.968959 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.968961 5129 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45767eb3_dd9a_4116_a1d6_a0e107c053ac.slice/crio-14e0f7105887b4626ec2fcfa912b5f97db0616d9556db675defc40bb3b5c929a WatchSource:0}: Error finding container 14e0f7105887b4626ec2fcfa912b5f97db0616d9556db675defc40bb3b5c929a: Status 404 returned error can't find the container with id 14e0f7105887b4626ec2fcfa912b5f97db0616d9556db675defc40bb3b5c929a Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.975891 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 11 16:55:38 crc kubenswrapper[5129]: while [ true ]; Dec 11 16:55:38 crc kubenswrapper[5129]: do Dec 11 16:55:38 crc kubenswrapper[5129]: for f in $(ls /tmp/serviceca); do Dec 11 16:55:38 crc kubenswrapper[5129]: echo $f Dec 11 16:55:38 crc kubenswrapper[5129]: ca_file_path="/tmp/serviceca/${f}" Dec 11 16:55:38 crc kubenswrapper[5129]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 11 16:55:38 crc kubenswrapper[5129]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 11 16:55:38 crc kubenswrapper[5129]: if [ -e "${reg_dir_path}" ]; then Dec 11 16:55:38 crc kubenswrapper[5129]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 11 16:55:38 crc kubenswrapper[5129]: else Dec 11 16:55:38 crc kubenswrapper[5129]: mkdir $reg_dir_path Dec 11 16:55:38 crc kubenswrapper[5129]: cp $ca_file_path $reg_dir_path/ca.crt Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: for d in $(ls /etc/docker/certs.d); do Dec 11 16:55:38 crc kubenswrapper[5129]: echo $d Dec 11 16:55:38 crc kubenswrapper[5129]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 11 16:55:38 crc kubenswrapper[5129]: 
reg_conf_path="/tmp/serviceca/${dp}" Dec 11 16:55:38 crc kubenswrapper[5129]: if [ ! -e "${reg_conf_path}" ]; then Dec 11 16:55:38 crc kubenswrapper[5129]: rm -rf /etc/docker/certs.d/$d Dec 11 16:55:38 crc kubenswrapper[5129]: fi Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: sleep 60 & wait ${!} Dec 11 16:55:38 crc kubenswrapper[5129]: done Dec 11 16:55:38 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xrc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-t8chw_openshift-image-registry(45767eb3-dd9a-4116-a1d6-a0e107c053ac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > 
logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.977104 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-t8chw" podUID="45767eb3-dd9a-4116-a1d6-a0e107c053ac" Dec 11 16:55:38 crc kubenswrapper[5129]: I1211 16:55:38.982615 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m95zr" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.984929 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bfafb25_f61d_4c63_8e1e_9cba0778559a.slice/crio-e601bcfe915d79538a0809522d0aec5188d507aaf71bc852ea26b15c1d7f9559 WatchSource:0}: Error finding container e601bcfe915d79538a0809522d0aec5188d507aaf71bc852ea26b15c1d7f9559: Status 404 returned error can't find the container with id e601bcfe915d79538a0809522d0aec5188d507aaf71bc852ea26b15c1d7f9559 Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.988897 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:38 crc kubenswrapper[5129]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 11 16:55:38 crc kubenswrapper[5129]: apiVersion: v1 Dec 11 16:55:38 crc kubenswrapper[5129]: clusters: Dec 11 16:55:38 crc kubenswrapper[5129]: - cluster: Dec 11 16:55:38 crc kubenswrapper[5129]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 11 16:55:38 crc kubenswrapper[5129]: server: https://api-int.crc.testing:6443 Dec 11 16:55:38 crc kubenswrapper[5129]: name: default-cluster Dec 11 16:55:38 crc kubenswrapper[5129]: contexts: Dec 11 16:55:38 crc 
kubenswrapper[5129]: - context: Dec 11 16:55:38 crc kubenswrapper[5129]: cluster: default-cluster Dec 11 16:55:38 crc kubenswrapper[5129]: namespace: default Dec 11 16:55:38 crc kubenswrapper[5129]: user: default-auth Dec 11 16:55:38 crc kubenswrapper[5129]: name: default-context Dec 11 16:55:38 crc kubenswrapper[5129]: current-context: default-context Dec 11 16:55:38 crc kubenswrapper[5129]: kind: Config Dec 11 16:55:38 crc kubenswrapper[5129]: preferences: {} Dec 11 16:55:38 crc kubenswrapper[5129]: users: Dec 11 16:55:38 crc kubenswrapper[5129]: - name: default-auth Dec 11 16:55:38 crc kubenswrapper[5129]: user: Dec 11 16:55:38 crc kubenswrapper[5129]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 11 16:55:38 crc kubenswrapper[5129]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 11 16:55:38 crc kubenswrapper[5129]: EOF Dec 11 16:55:38 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2khpc_openshift-ovn-kubernetes(8bfafb25-f61d-4c63-8e1e-9cba0778559a): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 11 16:55:38 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:38 crc kubenswrapper[5129]: E1211 16:55:38.990051 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" Dec 11 16:55:38 crc kubenswrapper[5129]: W1211 16:55:38.997465 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5313889a_2681_4f68_96f8_d5dfea8d3a8b.slice/crio-b64e057247f799939bbf09e347eca1d8e71b87c87b483eabb0a0aca99e48f779 WatchSource:0}: Error finding container b64e057247f799939bbf09e347eca1d8e71b87c87b483eabb0a0aca99e48f779: Status 404 returned error can't find the container with id b64e057247f799939bbf09e347eca1d8e71b87c87b483eabb0a0aca99e48f779 Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.001035 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 11 16:55:39 crc kubenswrapper[5129]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 11 16:55:39 crc kubenswrapper[5129]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-m95zr_openshift-multus(5313889a-2681-4f68-96f8-d5dfea8d3a8b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.002295 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-m95zr" podUID="5313889a-2681-4f68-96f8-d5dfea8d3a8b" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.007254 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" Dec 11 16:55:39 crc kubenswrapper[5129]: W1211 16:55:39.017780 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5c4751_c0b7_476b_a553_042ed9d66177.slice/crio-07b2d30c98512d37bd3c2cbd76ffbb0d37686eca3126cc750e4a77c91e3b547e WatchSource:0}: Error finding container 07b2d30c98512d37bd3c2cbd76ffbb0d37686eca3126cc750e4a77c91e3b547e: Status 404 returned error can't find the container with id 07b2d30c98512d37bd3c2cbd76ffbb0d37686eca3126cc750e4a77c91e3b547e Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.020131 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r4lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-sdzh7_openshift-multus(0e5c4751-c0b7-476b-a553-042ed9d66177): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.021362 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" podUID="0e5c4751-c0b7-476b-a553-042ed9d66177" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.025118 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.025152 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.025161 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.025174 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.025184 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:39Z","lastTransitionTime":"2025-12-11T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.032505 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 16:55:39 crc kubenswrapper[5129]: W1211 16:55:39.042786 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c60ead5_8f9c_4cc8_9a60_27d7967e1f2e.slice/crio-3e091fc2619d0e7d7e4020b59e13b62971c3ad6b881a82129fa3e56e98095cee WatchSource:0}: Error finding container 3e091fc2619d0e7d7e4020b59e13b62971c3ad6b881a82129fa3e56e98095cee: Status 404 returned error can't find the container with id 3e091fc2619d0e7d7e4020b59e13b62971c3ad6b881a82129fa3e56e98095cee Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.045315 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:39 crc kubenswrapper[5129]: set -euo pipefail Dec 11 16:55:39 crc kubenswrapper[5129]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 11 16:55:39 crc kubenswrapper[5129]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 11 16:55:39 crc kubenswrapper[5129]: # As the secret mount is optional we must wait for the files to be present. Dec 11 16:55:39 crc kubenswrapper[5129]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Dec 11 16:55:39 crc kubenswrapper[5129]: TS=$(date +%s) Dec 11 16:55:39 crc kubenswrapper[5129]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 11 16:55:39 crc kubenswrapper[5129]: HAS_LOGGED_INFO=0 Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: log_missing_certs(){ Dec 11 16:55:39 crc kubenswrapper[5129]: CUR_TS=$(date +%s) Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 11 16:55:39 crc kubenswrapper[5129]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 11 16:55:39 crc kubenswrapper[5129]: HAS_LOGGED_INFO=1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: } Dec 11 16:55:39 crc kubenswrapper[5129]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 11 16:55:39 crc kubenswrapper[5129]: log_missing_certs Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 5 Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/kube-rbac-proxy \ Dec 11 16:55:39 crc kubenswrapper[5129]: --logtostderr \ Dec 11 16:55:39 crc kubenswrapper[5129]: --secure-listen-address=:9108 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --upstream=http://127.0.0.1:29108/ \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-private-key-file=${TLS_PK} \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-cert-file=${TLS_CERT} Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-h4rqc_openshift-ovn-kubernetes(2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.047386 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:39 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_join_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 11 
16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_join_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_transit_switch_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_transit_switch_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: dns_name_resolver_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # This is needed so that converting clusters from GA to TP Dec 11 16:55:39 crc kubenswrapper[5129]: # will rollout control plane pods as well Dec 11 16:55:39 crc kubenswrapper[5129]: network_segmentation_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag="--enable-multi-network" Dec 11 16:55:39 crc kubenswrapper[5129]: fi 
Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" != "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag="--enable-multi-network" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: route_advertisements_enable_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: preconfigured_udn_addresses_enable_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Enable multi-network policy if configured (control-plane always full mode) Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_policy_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Enable admin network policy if configured (control-plane always full mode) Dec 11 16:55:39 crc kubenswrapper[5129]: admin_network_policy_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: if [ "shared" == "shared" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: gateway_mode_flags="--gateway-mode shared" Dec 11 16:55:39 crc kubenswrapper[5129]: elif [ "shared" == "local" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: gateway_mode_flags="--gateway-mode local" Dec 11 16:55:39 crc kubenswrapper[5129]: else Dec 11 16:55:39 crc kubenswrapper[5129]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 11 16:55:39 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/ovnkube \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-interconnect \ Dec 11 16:55:39 crc kubenswrapper[5129]: --init-cluster-manager "${K8S_NODE}" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 11 16:55:39 crc kubenswrapper[5129]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-bind-address "127.0.0.1:29108" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-enable-pprof \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-enable-config-duration \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v4_join_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v6_join_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${dns_name_resolver_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${persistent_ips_enabled_flag} \ Dec 11 16:55:39 crc 
kubenswrapper[5129]: ${multi_network_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${network_segmentation_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${gateway_mode_flags} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${route_advertisements_enable_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${preconfigured_udn_addresses_enable_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-ip=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-firewall=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-qos=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-service=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-multicast \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-multi-external-gateway=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${multi_network_policy_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${admin_network_policy_enabled_flag} Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-h4rqc_openshift-ovn-kubernetes(2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.048621 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.127760 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.127821 5129 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.127836 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.127857 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.127873 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:39Z","lastTransitionTime":"2025-12-11T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.201161 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.201342 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.201436 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.201391229 +0000 UTC m=+84.004921286 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.201496 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.201603 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.201616 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.201591814 +0000 UTC m=+84.005121851 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.201749 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.201924 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.201894363 +0000 UTC m=+84.005424430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.230047 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.230106 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.230125 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.230152 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:39 crc 
kubenswrapper[5129]: I1211 16:55:39.230171 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:39Z","lastTransitionTime":"2025-12-11T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.302839 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.302901 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.302946 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303064 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 
16:55:39.303076 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303095 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303107 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303145 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.303127292 +0000 UTC m=+84.106657319 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303164 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.303155893 +0000 UTC m=+84.106685920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303158 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303218 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303236 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.303335 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:40.303311578 +0000 UTC m=+84.106841595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.333946 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.333991 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.334002 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.334016 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.334025 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:39Z","lastTransitionTime":"2025-12-11T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.880340 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"1509e4d21a648111942dd9cadfe9f7925ff90e02144fb4619cc9272dd387a813"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.882975 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:39 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: source /etc/kubernetes/apiserver-url.env Dec 11 16:55:39 crc kubenswrapper[5129]: else Dec 11 16:55:39 crc kubenswrapper[5129]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 11 16:55:39 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 11 16:55:39 crc kubenswrapper[5129]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.884366 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"447114bba3ad3c6156dc9372bcb2fed2c0f1d538609a213ea59222bfe7c34650"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.884379 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.886823 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4nql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9gtgq_openshift-machine-config-operator(b9f3b447-4c51-44f3-9ade-21b54c3a6daf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.887270 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerStarted","Data":"3e091fc2619d0e7d7e4020b59e13b62971c3ad6b881a82129fa3e56e98095cee"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.888330 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m95zr" event={"ID":"5313889a-2681-4f68-96f8-d5dfea8d3a8b","Type":"ContainerStarted","Data":"b64e057247f799939bbf09e347eca1d8e71b87c87b483eabb0a0aca99e48f779"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.889768 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container 
&Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 11 16:55:39 crc kubenswrapper[5129]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 11 16:55:39 crc kubenswrapper[5129]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-m95zr_openshift-multus(5313889a-2681-4f68-96f8-d5dfea8d3a8b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.889793 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:39 crc kubenswrapper[5129]: set -euo pipefail Dec 11 16:55:39 crc kubenswrapper[5129]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 11 16:55:39 crc 
kubenswrapper[5129]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 11 16:55:39 crc kubenswrapper[5129]: # As the secret mount is optional we must wait for the files to be present. Dec 11 16:55:39 crc kubenswrapper[5129]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 11 16:55:39 crc kubenswrapper[5129]: TS=$(date +%s) Dec 11 16:55:39 crc kubenswrapper[5129]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 11 16:55:39 crc kubenswrapper[5129]: HAS_LOGGED_INFO=0 Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: log_missing_certs(){ Dec 11 16:55:39 crc kubenswrapper[5129]: CUR_TS=$(date +%s) Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 11 16:55:39 crc kubenswrapper[5129]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 11 16:55:39 crc kubenswrapper[5129]: HAS_LOGGED_INFO=1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: } Dec 11 16:55:39 crc kubenswrapper[5129]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 11 16:55:39 crc kubenswrapper[5129]: log_missing_certs Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 5 Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/kube-rbac-proxy \ Dec 11 16:55:39 crc kubenswrapper[5129]: --logtostderr \ Dec 11 16:55:39 crc kubenswrapper[5129]: --secure-listen-address=:9108 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --upstream=http://127.0.0.1:29108/ \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-private-key-file=${TLS_PK} \ Dec 11 16:55:39 crc kubenswrapper[5129]: --tls-cert-file=${TLS_CERT} Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-h4rqc_openshift-ovn-kubernetes(2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.890585 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"9359c9fbdc006142894fe3656a576c65244eb8dfb5332b272290ab8e5ff3e717"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.890888 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4nql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-9gtgq_openshift-machine-config-operator(b9f3b447-4c51-44f3-9ade-21b54c3a6daf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.891302 5129 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-m95zr" podUID="5313889a-2681-4f68-96f8-d5dfea8d3a8b" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.891930 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.892018 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 
16:55:39.892766 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:39 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_join_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_join_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_transit_switch_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_transit_switch_subnet_opt= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "" != "" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: dns_name_resolver_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ 
"false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # This is needed so that converting clusters from GA to TP Dec 11 16:55:39 crc kubenswrapper[5129]: # will rollout control plane pods as well Dec 11 16:55:39 crc kubenswrapper[5129]: network_segmentation_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag="--enable-multi-network" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" != "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_enabled_flag="--enable-multi-network" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: route_advertisements_enable_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: preconfigured_udn_addresses_enable_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 11 16:55:39 crc 
kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Enable multi-network policy if configured (control-plane always full mode) Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_policy_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "false" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Enable admin network policy if configured (control-plane always full mode) Dec 11 16:55:39 crc kubenswrapper[5129]: admin_network_policy_enabled_flag= Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "true" == "true" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: if [ "shared" == "shared" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: gateway_mode_flags="--gateway-mode shared" Dec 11 16:55:39 crc kubenswrapper[5129]: elif [ "shared" == "local" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: gateway_mode_flags="--gateway-mode local" Dec 11 16:55:39 crc kubenswrapper[5129]: else Dec 11 16:55:39 crc kubenswrapper[5129]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 11 16:55:39 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/ovnkube \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-interconnect \ Dec 11 16:55:39 crc kubenswrapper[5129]: --init-cluster-manager "${K8S_NODE}" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 11 16:55:39 crc kubenswrapper[5129]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-bind-address "127.0.0.1:29108" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-enable-pprof \ Dec 11 16:55:39 crc kubenswrapper[5129]: --metrics-enable-config-duration \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v4_join_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v6_join_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${dns_name_resolver_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${persistent_ips_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${multi_network_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${network_segmentation_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${gateway_mode_flags} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${route_advertisements_enable_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${preconfigured_udn_addresses_enable_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-ip=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-firewall=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-qos=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-egress-service=true \ 
Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-multicast \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-multi-external-gateway=true \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${multi_network_policy_enabled_flag} \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${admin_network_policy_enabled_flag} Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-h4rqc_openshift-ovn-kubernetes(2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.893127 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.893458 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerStarted","Data":"07b2d30c98512d37bd3c2cbd76ffbb0d37686eca3126cc750e4a77c91e3b547e"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.894034 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.895948 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-spxfg" event={"ID":"a0974084-197d-495d-b227-4ea7d61426c6","Type":"ContainerStarted","Data":"8eab0a96b438c3ce83c1d761c8dd609f186ebfab1dd2926b210267e6390119f5"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.896353 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="init 
container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r4lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-sdzh7_openshift-multus(0e5c4751-c0b7-476b-a553-042ed9d66177): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.898254 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 
16:55:39 crc kubenswrapper[5129]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 11 16:55:39 crc kubenswrapper[5129]: set -uo pipefail Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 11 16:55:39 crc kubenswrapper[5129]: HOSTS_FILE="/etc/hosts" Dec 11 16:55:39 crc kubenswrapper[5129]: TEMP_FILE="/tmp/hosts.tmp" Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Make a temporary file with the old hosts file's attributes. Dec 11 16:55:39 crc kubenswrapper[5129]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 11 16:55:39 crc kubenswrapper[5129]: echo "Failed to preserve hosts file. Exiting." Dec 11 16:55:39 crc kubenswrapper[5129]: exit 1 Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: while true; do Dec 11 16:55:39 crc kubenswrapper[5129]: declare -A svc_ips Dec 11 16:55:39 crc kubenswrapper[5129]: for svc in "${services[@]}"; do Dec 11 16:55:39 crc kubenswrapper[5129]: # Fetch service IP from cluster dns if present. We make several tries Dec 11 16:55:39 crc kubenswrapper[5129]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 11 16:55:39 crc kubenswrapper[5129]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 11 16:55:39 crc kubenswrapper[5129]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 11 16:55:39 crc kubenswrapper[5129]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:39 crc kubenswrapper[5129]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:39 crc kubenswrapper[5129]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 11 16:55:39 crc kubenswrapper[5129]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 11 16:55:39 crc kubenswrapper[5129]: for i in ${!cmds[*]} Dec 11 16:55:39 crc kubenswrapper[5129]: do Dec 11 16:55:39 crc kubenswrapper[5129]: ips=($(eval "${cmds[i]}")) Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: svc_ips["${svc}"]="${ips[@]}" Dec 11 16:55:39 crc kubenswrapper[5129]: break Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Update /etc/hosts only if we get valid service IPs Dec 11 16:55:39 crc kubenswrapper[5129]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 11 16:55:39 crc kubenswrapper[5129]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 11 16:55:39 crc kubenswrapper[5129]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 11 16:55:39 crc kubenswrapper[5129]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:39 crc kubenswrapper[5129]: continue Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # Append resolver entries for services Dec 11 16:55:39 crc kubenswrapper[5129]: rc=0 Dec 11 16:55:39 crc kubenswrapper[5129]: for svc in "${!svc_ips[@]}"; do Dec 11 16:55:39 crc kubenswrapper[5129]: for ip in ${svc_ips[${svc}]}; do Dec 11 16:55:39 crc kubenswrapper[5129]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ $rc -ne 0 ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:39 crc kubenswrapper[5129]: continue Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 11 16:55:39 crc kubenswrapper[5129]: # Replace /etc/hosts with our modified version if needed Dec 11 16:55:39 crc kubenswrapper[5129]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 11 16:55:39 crc kubenswrapper[5129]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 60 & wait Dec 11 16:55:39 crc kubenswrapper[5129]: unset svc_ips Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lw8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-spxfg_openshift-dns(a0974084-197d-495d-b227-4ea7d61426c6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.898602 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
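The dns-node-resolver container command dumped above keeps `/etc/hosts` in sync by filtering out its own marker lines, appending fresh entries, and replacing the file only on change. A minimal standalone sketch of that filter-and-append step, using the same marker and sed invocation as the log (the file paths and the `10.217.4.11` IP here are stand-ins for illustration):

```shell
#!/bin/bash
# Sketch of the node-resolver hosts-file update loop body seen in the log.
# Uses local stand-in files instead of /etc/hosts; marker and sed command
# match the embedded script.
set -uo pipefail

OPENSHIFT_MARKER="openshift-generated-node-resolver"
HOSTS_FILE="hosts"          # stand-in for /etc/hosts
TEMP_FILE="hosts.tmp"       # stand-in for /tmp/hosts.tmp

# Seed a hosts file containing one stale generated entry.
cat > "${HOSTS_FILE}" <<EOF
127.0.0.1 localhost
1.2.3.4 image-registry.openshift-image-registry.svc # ${OPENSHIFT_MARKER}
EOF

# Drop previously generated lines; write everything else to TEMP_FILE.
# (GNU sed: -n/--silent suppresses stdout, `w` writes kept lines to a file.)
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"

# Append a fresh resolver entry (placeholder IP).
echo "10.217.4.11 image-registry.openshift-image-registry.svc # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}"

# Replace the hosts file only if the contents differ, as the original does.
cmp -s "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
```

The `cmp || cp` at the end is why the real script can leave `TEMP_FILE` in place between iterations: an unchanged hosts file costs no write at all.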
event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ab94024b72fbe859ce9bc224e53798c718c2bb2b6d395dc9ada679002987d775"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.899438 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" podUID="0e5c4751-c0b7-476b-a553-042ed9d66177" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.899459 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-spxfg" podUID="a0974084-197d-495d-b227-4ea7d61426c6" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.900905 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:39 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 11 16:55:39 crc kubenswrapper[5129]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 11 16:55:39 crc kubenswrapper[5129]: ho_enable="--enable-hybrid-overlay" Dec 11 16:55:39 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 11 16:55:39 crc kubenswrapper[5129]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 11 16:55:39 crc kubenswrapper[5129]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --webhook-host=127.0.0.1 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --webhook-port=9743 \ Dec 11 16:55:39 crc kubenswrapper[5129]: ${ho_enable} \ Dec 11 16:55:39 crc kubenswrapper[5129]: --enable-interconnect \ Dec 11 16:55:39 crc kubenswrapper[5129]: --disable-approver \ Dec 11 16:55:39 crc kubenswrapper[5129]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --wait-for-kubernetes-api=200s \ Dec 11 16:55:39 crc kubenswrapper[5129]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --loglevel="${LOGLEVEL}" Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.901166 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.902769 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"e601bcfe915d79538a0809522d0aec5188d507aaf71bc852ea26b15c1d7f9559"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.905639 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 11 16:55:39 crc kubenswrapper[5129]: apiVersion: v1 Dec 11 16:55:39 crc kubenswrapper[5129]: clusters: Dec 11 16:55:39 crc kubenswrapper[5129]: - cluster: Dec 11 16:55:39 crc kubenswrapper[5129]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 11 16:55:39 crc kubenswrapper[5129]: server: https://api-int.crc.testing:6443 Dec 11 16:55:39 crc kubenswrapper[5129]: name: default-cluster Dec 11 16:55:39 crc kubenswrapper[5129]: contexts: Dec 11 16:55:39 crc kubenswrapper[5129]: - context: Dec 11 16:55:39 crc kubenswrapper[5129]: cluster: default-cluster Dec 11 16:55:39 crc kubenswrapper[5129]: namespace: default Dec 11 16:55:39 crc kubenswrapper[5129]: user: default-auth Dec 11 16:55:39 crc kubenswrapper[5129]: name: default-context Dec 11 16:55:39 crc kubenswrapper[5129]: current-context: default-context Dec 11 16:55:39 crc kubenswrapper[5129]: kind: Config Dec 11 16:55:39 crc kubenswrapper[5129]: preferences: {} Dec 11 16:55:39 crc kubenswrapper[5129]: users: Dec 11 16:55:39 crc kubenswrapper[5129]: - name: default-auth Dec 11 16:55:39 crc kubenswrapper[5129]: user: Dec 11 16:55:39 crc kubenswrapper[5129]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 11 16:55:39 crc 
kubenswrapper[5129]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 11 16:55:39 crc kubenswrapper[5129]: EOF Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2khpc_openshift-ovn-kubernetes(8bfafb25-f61d-4c63-8e1e-9cba0778559a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.906259 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t8chw" event={"ID":"45767eb3-dd9a-4116-a1d6-a0e107c053ac","Type":"ContainerStarted","Data":"14e0f7105887b4626ec2fcfa912b5f97db0616d9556db675defc40bb3b5c929a"} Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.907756 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" 
podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.907865 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 11 16:55:39 crc kubenswrapper[5129]: if [[ -f "/env/_master" ]]; then Dec 11 16:55:39 crc kubenswrapper[5129]: set -o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: source "/env/_master" Dec 11 16:55:39 crc kubenswrapper[5129]: set +o allexport Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: Dec 11 16:55:39 crc kubenswrapper[5129]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 11 16:55:39 crc kubenswrapper[5129]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 11 16:55:39 crc kubenswrapper[5129]: --disable-webhook \ Dec 11 16:55:39 crc kubenswrapper[5129]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 11 16:55:39 crc kubenswrapper[5129]: --loglevel="${LOGLEVEL}" Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.908970 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.909483 5129 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 11 16:55:39 crc kubenswrapper[5129]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 11 16:55:39 crc kubenswrapper[5129]: while [ true ]; Dec 11 16:55:39 crc kubenswrapper[5129]: do Dec 11 16:55:39 crc kubenswrapper[5129]: for f in $(ls /tmp/serviceca); do Dec 11 16:55:39 crc kubenswrapper[5129]: echo $f Dec 11 16:55:39 crc kubenswrapper[5129]: ca_file_path="/tmp/serviceca/${f}" Dec 11 16:55:39 crc kubenswrapper[5129]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 11 16:55:39 crc kubenswrapper[5129]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 11 16:55:39 crc kubenswrapper[5129]: if [ -e "${reg_dir_path}" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 11 16:55:39 crc kubenswrapper[5129]: else Dec 11 16:55:39 crc kubenswrapper[5129]: mkdir $reg_dir_path Dec 11 16:55:39 crc kubenswrapper[5129]: cp $ca_file_path $reg_dir_path/ca.crt Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: for d in $(ls /etc/docker/certs.d); do Dec 11 16:55:39 crc kubenswrapper[5129]: echo $d Dec 11 16:55:39 crc kubenswrapper[5129]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 11 16:55:39 crc kubenswrapper[5129]: reg_conf_path="/tmp/serviceca/${dp}" Dec 11 16:55:39 crc kubenswrapper[5129]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 11 16:55:39 crc kubenswrapper[5129]: rm -rf /etc/docker/certs.d/$d Dec 11 16:55:39 crc kubenswrapper[5129]: fi Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: sleep 60 & wait ${!} Dec 11 16:55:39 crc kubenswrapper[5129]: done Dec 11 16:55:39 crc kubenswrapper[5129]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xrc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-t8chw_openshift-image-registry(45767eb3-dd9a-4116-a1d6-a0e107c053ac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 11 16:55:39 crc kubenswrapper[5129]: > logger="UnhandledError" Dec 11 16:55:39 crc kubenswrapper[5129]: E1211 16:55:39.910716 5129 
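The node-ca script above shuttles registry names between two encodings: configmap keys cannot contain `:`, so a registry `host:port` is stored under `/tmp/serviceca` as `host..port`, and the script converts back with sed when populating `/etc/docker/certs.d`. A small sketch of both directions, using the same sed expressions as the log (the key below is an example value, not taken from this node):

```shell
#!/bin/bash
# Sketch of the node-ca registry-name translation: ".." in a configmap key
# stands for ":" in the registry hostname. Example key for illustration.
set -euo pipefail

key="registry.example.test..5000"

# Decode: the last ".." becomes ":" (same sed as in the embedded script).
registry=$(echo "${key}" | sed -r 's/(.*)\.\./\1:/')
echo "${registry}"   # registry.example.test:5000

# Encode: the inverse, used when pruning stale certs.d directories.
encoded=$(echo "${registry}" | sed -r 's/(.*):/\1\.\./')
echo "${encoded}"    # registry.example.test..5000
```

Because `(.*)` is greedy, only the last `..` (or `:`) is rewritten, which matters for hostnames that legitimately contain dots.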
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-t8chw" podUID="45767eb3-dd9a-4116-a1d6-a0e107c053ac" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.918741 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.930118 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45767eb3-dd9a-4116-a1d6-a0e107c053ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xrc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.948912 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb249f8f-9a28-4c68-91ed-0a729945afdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.949826 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.949886 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.949906 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.949931 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.949954 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:39Z","lastTransitionTime":"2025-12-11T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.959965 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.968890 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:39 crc kubenswrapper[5129]: I1211 16:55:39.990037 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"264fc91e-68dd-4c06-8008-a8942f0078d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5654ca63508057b717f80c16ebe5d6d0766d4282449ac01571c7a04945749180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1983a596cbbf41969328c6642b06b8abba3cc5ae8b162c4d87603de486e45587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e16a35e61d8e2ff1ef59921f54ada877c2429ae4dd9b1dfda1ef5de602cea580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://14aaa84bd234f14470da0a92e12408314e20785eb32082c15df56c66488831bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://09ab56e9007d2a650254d1000ce66094953c4e0e92b21cf18755434ff792f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.007271 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T16:55:21Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911350 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.910728 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI1211 16:55:21.911482 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911587 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911750 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\"\\\\nI1211 16:55:21.912325 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765472120\\\\\\\\\\\\\\\" (2025-12-11 16:55:20 +0000 UTC to 2025-12-11 16:55:21 +0000 UTC (now=2025-12-11 16:55:21.912292985 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914388 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1765472121\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1765472121\\\\\\\\\\\\\\\" (2025-12-11 15:55:21 +0000 UTC to 2028-12-11 15:55:21 +0000 UTC (now=2025-12-11 16:55:21.912560614 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914453 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI1211 16:55:21.914482 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1211 16:55:21.914557 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF1211 16:55:21.914774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T16:55:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.021301 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.030042 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.040561 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.050688 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.052657 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.052741 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.052771 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.052803 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.052829 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.061185 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.085155 5129 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2khpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.096568 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-m95zr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdznl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m95zr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.114100 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5c4751-c0b7-476b-a553-042ed9d66177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.123485 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-h4rqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.136187 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"634ba037-86a0-4350-86e6-ff15f9395f74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793
dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.152902 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf7945d-7e6c-4b24-854b-268b781347c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740
a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"s
upplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.154735 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.154914 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.155232 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.155538 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.156248 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.164978 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.174594 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.198001 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2khpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.209920 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-m95zr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdznl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m95zr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.212979 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.213136 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.213240 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.213216479 +0000 UTC m=+86.016746546 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.213278 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.213334 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.213421 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.213421 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.213397305 +0000 UTC m=+86.016927362 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.213477 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.213467897 +0000 UTC m=+86.016998024 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.222598 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5c4751-c0b7-476b-a553-042ed9d66177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.233912 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-h4rqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.243787 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"634ba037-86a0-4350-86e6-ff15f9395f74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 
16:55:40.250589 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf7945d-7e6c-4b24-854b-268b781347c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc 
kubenswrapper[5129]: I1211 16:55:40.257729 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.257784 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.257797 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.257813 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.257824 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.261100 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.268721 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.275413 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45767eb3-dd9a-4116-a1d6-a0e107c053ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xrc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.284351 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb249f8f-9a28-4c68-91ed-0a729945afdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.292149 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.299113 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.313748 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"264fc91e-68dd-4c06-8008-a8942f0078d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5654ca63508057b717f80c16ebe5d6d0766d4282449ac01571c7a04945749180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1983a596cbbf41969328c6642b06b8abba3cc5ae8b162c4d87603de486e45587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e16a35e61d8e2ff1ef59921f54ada877c2429ae4dd9b1dfda1ef5de602cea580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://14aaa84bd234f14470da0a92e12408314e20785eb32082c15df56c66488831bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://09ab56e9007d2a650254d1000ce66094953c4e0e92b21cf18755434ff792f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.313933 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.313962 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.314000 5129 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314055 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314068 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314077 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314080 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314094 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314103 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314129 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.314114667 +0000 UTC m=+86.117644684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314142 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.314137028 +0000 UTC m=+86.117667045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314204 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.314296 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. No retries permitted until 2025-12-11 16:55:42.314275342 +0000 UTC m=+86.117805379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.324740 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T16:55:21Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911350 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.910728 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI1211 16:55:21.911482 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911587 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911750 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\"\\\\nI1211 16:55:21.912325 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765472120\\\\\\\\\\\\\\\" (2025-12-11 16:55:20 +0000 UTC to 2025-12-11 16:55:21 +0000 UTC (now=2025-12-11 16:55:21.912292985 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914388 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1765472121\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1765472121\\\\\\\\\\\\\\\" (2025-12-11 15:55:21 +0000 UTC to 2028-12-11 15:55:21 +0000 UTC (now=2025-12-11 16:55:21.912560614 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914453 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI1211 16:55:21.914482 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1211 16:55:21.914557 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF1211 16:55:21.914774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T16:55:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.335359 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.342873 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.351663 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.360064 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.360091 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.360136 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.360149 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.360159 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.462117 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.462231 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.462256 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.462286 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.462308 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.520155 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.520157 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.520374 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.520381 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.520506 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.520169 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.520695 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:40 crc kubenswrapper[5129]: E1211 16:55:40.521206 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.527787 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.529073 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.531789 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.534340 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.538775 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.542315 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.545070 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.547008 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.548300 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.551417 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.553486 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.557174 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.559018 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.562769 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.563747 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.565434 5129 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.565540 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.565569 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.565602 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.565625 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.566224 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.568023 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.570674 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.572417 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.574342 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.577031 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.580745 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.582715 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.584788 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.587418 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.589196 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.591686 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.593071 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.598094 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.599346 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.601424 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" 
path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.604414 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.607843 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.610014 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.611901 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.613057 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.615800 5129 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.616059 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.622456 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.624414 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.626249 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.628714 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.629905 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.633788 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.635417 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.637613 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.639280 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.642278 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.644087 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.646317 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.648453 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.650651 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.652174 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.654939 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.657543 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.660037 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.662790 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.665878 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.668496 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.668580 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.668600 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.668627 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.668646 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.771198 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.771270 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.771299 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.771331 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.771355 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.874184 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.874212 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.874223 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.874237 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.874245 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.976428 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.976488 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.976507 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.976561 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:40 crc kubenswrapper[5129]: I1211 16:55:40.976583 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:40Z","lastTransitionTime":"2025-12-11T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.078986 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.079247 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.079266 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.079290 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.079309 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.181375 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.181456 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.181485 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.181556 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.181582 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.284107 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.284175 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.284193 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.284220 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.284239 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.386867 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.386935 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.386954 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.386978 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.386998 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.489904 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.489995 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.490018 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.490047 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.490065 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.592524 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.592566 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.592576 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.592590 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.592599 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.694753 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.694815 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.694887 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.694916 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.694939 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.797022 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.797094 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.797115 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.797140 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.797158 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.900272 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.900339 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.900361 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.900389 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:41 crc kubenswrapper[5129]: I1211 16:55:41.900409 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:41Z","lastTransitionTime":"2025-12-11T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.003245 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.003299 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.003311 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.003328 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.003342 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.106339 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.106460 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.106487 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.106550 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.106579 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.209400 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.209469 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.209488 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.209545 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.209564 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.234153 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.234462 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.234620 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:55:46.234575291 +0000 UTC m=+90.038105358 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.234663 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.234771 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:46.234751436 +0000 UTC m=+90.038281493 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.234811 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.234957 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.235024 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:46.235010584 +0000 UTC m=+90.038540641 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.311746 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.311817 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.311835 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.311859 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.311881 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.336454 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.336587 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.336689 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336746 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336784 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336875 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. 
No retries permitted until 2025-12-11 16:55:46.336845342 +0000 UTC m=+90.140375399 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336896 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336790 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336931 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336949 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.336960 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.337036 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:46.337008857 +0000 UTC m=+90.140538964 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.337073 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:46.337059518 +0000 UTC m=+90.140589575 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.413895 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.413959 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.413977 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.414006 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.414024 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.516649 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.516725 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.516749 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.516779 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.516804 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.520462 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.520500 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.520471 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.520682 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.520831 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.520912 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.520951 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:42 crc kubenswrapper[5129]: E1211 16:55:42.527087 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.619784 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.619848 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.619871 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.619894 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.619910 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.722361 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.722403 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.722427 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.722440 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.722448 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.824873 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.824909 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.824921 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.824936 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.824947 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.926964 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.927029 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.927048 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.927073 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:42 crc kubenswrapper[5129]: I1211 16:55:42.927092 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:42Z","lastTransitionTime":"2025-12-11T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.029196 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.029276 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.029295 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.029320 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.029338 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.132090 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.132148 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.132166 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.132201 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.132240 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.234484 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.234630 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.234657 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.234689 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.234714 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.337505 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.337617 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.337635 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.337661 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.337678 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.440096 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.440155 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.440173 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.440200 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.440218 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.542642 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.542709 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.542727 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.542752 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.542774 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.645827 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.645882 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.645901 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.645927 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.645947 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.748849 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.748909 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.748986 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.749013 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.749031 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.851851 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.851904 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.851923 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.851945 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.851963 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.954836 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.954911 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.954933 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.954970 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:43 crc kubenswrapper[5129]: I1211 16:55:43.954993 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:43Z","lastTransitionTime":"2025-12-11T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.057555 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.057650 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.057689 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.057723 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.057746 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.161170 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.161231 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.161245 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.161262 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.161275 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.263021 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.263091 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.263113 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.263138 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.263159 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.366182 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.366260 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.366286 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.366319 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.366343 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.468485 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.468604 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.468646 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.468664 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.468680 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.520206 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:44 crc kubenswrapper[5129]: E1211 16:55:44.520374 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.520500 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.520577 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.520733 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:44 crc kubenswrapper[5129]: E1211 16:55:44.520742 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:44 crc kubenswrapper[5129]: E1211 16:55:44.520839 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:44 crc kubenswrapper[5129]: E1211 16:55:44.521059 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.571683 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.571753 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.571772 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.571800 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.571819 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.673964 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.674024 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.674043 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.674070 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.674089 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.776814 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.776878 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.776901 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.776926 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.776945 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.879417 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.879490 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.879533 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.879557 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.879576 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.982560 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.982608 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.982617 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.982631 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:44 crc kubenswrapper[5129]: I1211 16:55:44.982691 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:44Z","lastTransitionTime":"2025-12-11T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.084875 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.084962 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.084997 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.085027 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.085048 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.186697 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.186752 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.186765 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.186780 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.186790 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.289833 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.289906 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.289926 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.289953 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.289974 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.392717 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.392786 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.392828 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.392863 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.392887 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.495475 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.495551 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.495567 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.495584 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.495595 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.597835 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.597892 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.597905 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.597925 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.597938 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.700716 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.700787 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.700805 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.700826 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.700843 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.803714 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.803803 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.803851 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.803886 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.803914 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.907247 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.907312 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.907335 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.907364 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:45 crc kubenswrapper[5129]: I1211 16:55:45.907386 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:45Z","lastTransitionTime":"2025-12-11T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.009987 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.010030 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.010039 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.010053 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.010063 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.112218 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.112273 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.112285 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.112302 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.112315 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.214926 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.214989 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.215009 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.215037 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.215078 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.282468 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.282708 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.282810 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.282885 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.282992 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.283007 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.282977898 +0000 UTC m=+98.086507945 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.283259 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.283171904 +0000 UTC m=+98.086701961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.283343 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.283318179 +0000 UTC m=+98.086848376 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.316945 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.317012 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.317030 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.317056 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.317074 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.384114 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.384210 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.384342 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384443 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384498 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384561 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:46 crc 
kubenswrapper[5129]: E1211 16:55:46.384581 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384444 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384671 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384748 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384638 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.38460165 +0000 UTC m=+98.188131707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384796 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.384776045 +0000 UTC m=+98.188306092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.384850 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:55:54.384824737 +0000 UTC m=+98.188354794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.419950 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.420284 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.420479 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.420706 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.420866 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.520343 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.520343 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.520576 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.520616 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.520806 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.520967 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.521052 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:46 crc kubenswrapper[5129]: E1211 16:55:46.521240 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.523418 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.523509 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.523563 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.523586 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.523750 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.548931 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"264fc91e-68dd-4c06-8008-a8942f0078d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5654ca63508057b717f80c16ebe5d6d0766d4282449ac01571c7a04945749180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1983a596cbbf41969328c6642b06b8abba3cc5ae8b162c4d87603de486e45587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e16a35e61d8e2ff1ef59921f54ada877c2429ae4dd9b1dfda1ef5de602cea580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://14aaa84bd234f14470da0a92e12408314e20785eb32082c15df56c66488831bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://09ab56e9007d2a650254d1000ce66094953c4e0e92b21cf18755434ff792f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.566190 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T16:55:21Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911350 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.910728 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI1211 16:55:21.911482 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911587 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911750 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\"\\\\nI1211 16:55:21.912325 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765472120\\\\\\\\\\\\\\\" (2025-12-11 16:55:20 +0000 UTC to 2025-12-11 16:55:21 +0000 UTC (now=2025-12-11 16:55:21.912292985 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914388 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1765472121\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1765472121\\\\\\\\\\\\\\\" (2025-12-11 15:55:21 +0000 UTC to 2028-12-11 15:55:21 +0000 UTC (now=2025-12-11 16:55:21.912560614 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914453 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI1211 16:55:21.914482 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1211 16:55:21.914557 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF1211 16:55:21.914774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T16:55:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.579786 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.589691 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.602173 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.619344 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.625291 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.625351 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.625364 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 
16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.625384 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.625395 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.627826 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.650143 5129 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2khpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.662887 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-m95zr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdznl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m95zr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.678814 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5c4751-c0b7-476b-a553-042ed9d66177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.688857 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-h4rqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.700733 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"634ba037-86a0-4350-86e6-ff15f9395f74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 
16:55:46.710980 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf7945d-7e6c-4b24-854b-268b781347c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc 
kubenswrapper[5129]: I1211 16:55:46.727580 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.727650 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.727671 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.727698 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.727720 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.727706 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.741465 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.751291 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8chw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45767eb3-dd9a-4116-a1d6-a0e107c053ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xrc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.766850 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb249f8f-9a28-4c68-91ed-0a729945afdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.782133 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.796056 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.830589 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.830648 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 
16:55:46.830667 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.830690 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.830708 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.933465 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.933565 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.933597 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.933622 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:46 crc kubenswrapper[5129]: I1211 16:55:46.933642 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:46Z","lastTransitionTime":"2025-12-11T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.036592 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.036656 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.036681 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.036708 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.036726 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.140208 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.140284 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.140309 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.140341 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.140366 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.242808 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.242849 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.242859 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.242873 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.242882 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.345479 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.345583 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.345604 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.345654 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.345673 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.448204 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.448264 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.448281 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.448305 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.448323 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.551370 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.551429 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.551448 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.551470 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.551487 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.654299 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.654449 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.654467 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.654488 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.654503 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.756635 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.756679 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.756691 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.756706 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.756718 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.859379 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.859443 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.859465 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.859491 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.859551 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.961806 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.961867 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.961886 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.961915 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:47 crc kubenswrapper[5129]: I1211 16:55:47.961934 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:47Z","lastTransitionTime":"2025-12-11T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.064906 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.065001 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.065024 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.065047 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.065065 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.167694 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.167760 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.167783 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.167809 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.167828 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.270611 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.270673 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.270691 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.270714 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.270731 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.303286 5129 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.373847 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.374090 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.374165 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.374192 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.374251 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.477534 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.477626 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.477648 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.477671 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.477688 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.520618 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.520660 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.520777 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.520814 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.520844 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.520912 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.520985 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.521134 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.579769 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.579830 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.579849 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.579872 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.579891 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.682128 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.682186 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.682212 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.682244 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.682265 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.784767 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.784842 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.784869 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.784899 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.784922 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.794226 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.794290 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.794318 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.794345 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.794366 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.808120 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.810831 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.810860 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.810869 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.810879 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.810888 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.823038 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.826711 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.826775 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.826800 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.826829 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.826854 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.840229 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.843649 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.843714 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.843738 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.843761 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.843777 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.857439 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.861436 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.861557 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.861578 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.861604 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.861622 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.874423 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff79d577-6c21-4103-ac1a-4d8d177a81d3\\\",\\\"systemUUID\\\":\\\"460ed1db-5810-4839-a957-07b4c992c443\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:48 crc kubenswrapper[5129]: E1211 16:55:48.874628 5129 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.888006 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.888061 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.888080 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.888103 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.888120 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.990914 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.990987 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.991014 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.991038 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:48 crc kubenswrapper[5129]: I1211 16:55:48.991056 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:48Z","lastTransitionTime":"2025-12-11T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.093895 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.094033 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.094059 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.094092 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.094117 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.196392 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.196488 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.196575 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.196613 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.196640 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.299099 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.299176 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.299196 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.299221 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.299240 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.402195 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.402267 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.402294 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.402326 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.402355 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.505586 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.505662 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.505690 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.505719 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.505744 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.608760 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.608906 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.608935 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.608967 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.608990 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.711297 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.711334 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.711342 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.711354 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.711363 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.813504 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.813603 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.813622 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.813645 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.813663 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.916353 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.916445 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.916466 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.916493 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:49 crc kubenswrapper[5129]: I1211 16:55:49.916510 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:49Z","lastTransitionTime":"2025-12-11T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.018088 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.018132 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.018142 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.018155 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.018167 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.120219 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.120265 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.120278 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.120293 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.120302 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.223190 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.223284 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.223310 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.223346 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.223371 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.327039 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.327121 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.327148 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.327178 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.327278 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.410134 5129 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.430189 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.430265 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.430289 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.430320 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.430345 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.519502 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.519502 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:50 crc kubenswrapper[5129]: E1211 16:55:50.519658 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.519885 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.519928 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:50 crc kubenswrapper[5129]: E1211 16:55:50.520110 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:50 crc kubenswrapper[5129]: E1211 16:55:50.520623 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:50 crc kubenswrapper[5129]: E1211 16:55:50.520732 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.521030 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" Dec 11 16:55:50 crc kubenswrapper[5129]: E1211 16:55:50.521406 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.532859 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.532910 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.532924 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.532943 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.532962 5129 setters.go:618] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.635232 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.635302 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.635322 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.635347 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.635364 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.737384 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.737455 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.737480 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.737508 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.737532 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.839644 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.839715 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.839740 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.839771 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.839795 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.942419 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.942563 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.942592 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.942624 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:50 crc kubenswrapper[5129]: I1211 16:55:50.942648 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:50Z","lastTransitionTime":"2025-12-11T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.045269 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.045328 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.045348 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.045371 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.045389 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.147975 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.148040 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.148058 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.148082 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.148099 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.250953 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.250996 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.251007 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.251023 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.251034 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.274617 5129 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.353620 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.353688 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.353707 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.353733 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.353752 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.456126 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.456197 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.456216 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.456238 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.456258 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.559131 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.559211 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.559237 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.559268 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.559328 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.661689 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.662203 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.662678 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.662933 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.663214 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.766019 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.766082 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.766101 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.766125 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.766146 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.868258 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.868306 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.868320 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.868338 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.868350 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.939827 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m95zr" event={"ID":"5313889a-2681-4f68-96f8-d5dfea8d3a8b","Type":"ContainerStarted","Data":"9828ed0e44bb4b999d124985cebaf15596efde2fe8148192b73b4f18b49fb8ff"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.954418 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.963764 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t8chw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45767eb3-dd9a-4116-a1d6-a0e107c053ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xrc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t8chw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.970011 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.970057 5129 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.970067 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.970083 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.970096 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:51Z","lastTransitionTime":"2025-12-11T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.976345 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb249f8f-9a28-4c68-91ed-0a729945afdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d512a17000ca709c3c084a435e8fcbecf28038516c0a11190f2385d68ae16fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5bdd0c143fa7e8812638159329a3e152d6d88c66c8e0fb790ae35c0ded8176e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5c934b2c22637164c8d767636f1daecb334588708bfe1bad7c8292922847f7ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.986452 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:51 crc kubenswrapper[5129]: I1211 16:55:51.995389 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.022954 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"264fc91e-68dd-4c06-8008-a8942f0078d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://5654ca63508057b717f80c16ebe5d6d0766d4282449ac01571c7a04945749180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1983a596cbbf41969328c6642b06b8abba3cc5ae8b162c4d87603de486e45587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e16a35e61d8e2ff1ef59921f54ada877c2429ae4dd9b1dfda1ef5de602cea580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://14aaa84bd234f14470da0a92e12408314e20785eb32082c15df56c66488831bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://09ab56e9007d2a650254d1000ce66094953c4e0e92b21cf18755434ff792f630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a161bcd71c7c245e0174c12fc079ce8789e12865fff5515d2d2033f44cc32c33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317f384632b4007b3a19c23c4e25109b8e9b742377084dfe984c85f66b93342b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://7f0680f85a12bdd516e7ffee1f5727fd1f8bc24aeff41fb3d03ce314592125b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.037059 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T16:55:21Z\\\",\\\"message\\\":\\\":extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911350 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:55:21.910728 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI1211 16:55:21.911482 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911587 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:55:21.911750 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\"\\\\nI1211 16:55:21.912325 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-1515955072/tls.crt::/tmp/serving-cert-1515955072/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765472120\\\\\\\\\\\\\\\" (2025-12-11 16:55:20 +0000 UTC to 2025-12-11 16:55:21 +0000 UTC (now=2025-12-11 16:55:21.912292985 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914388 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1765472121\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1765472121\\\\\\\\\\\\\\\" (2025-12-11 15:55:21 +0000 UTC to 2028-12-11 15:55:21 +0000 UTC (now=2025-12-11 16:55:21.912560614 +0000 UTC))\\\\\\\"\\\\nI1211 16:55:21.914453 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI1211 16:55:21.914482 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1211 16:55:21.914557 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF1211 16:55:21.914774 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T16:55:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.053767 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.064800 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-spxfg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0974084-197d-495d-b227-4ea7d61426c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lw8m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-spxfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.071962 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.072009 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.072026 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.072046 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.072062 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.077214 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f3b447-4c51-44f3-9ade-21b54c3a6daf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4nql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9gtgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.090926 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.100930 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fptr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15d52990-0733-45fe-ac96-429a9503dbab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ss94k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fptr2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.120119 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfafb25-f61d-4c63-8e1e-9cba0778559a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2jpwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2khpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.131869 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-m95zr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313889a-2681-4f68-96f8-d5dfea8d3a8b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://9828ed0e44bb4b999d124985cebaf15596efde2fe8148192b73b4f18b49fb8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"request
s\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:55:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdznl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m95zr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.150911 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5c4751-c0b7-476b-a553-042ed9d66177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r4lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sdzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.160869 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnwmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:55:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-h4rqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.172065 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"634ba037-86a0-4350-86e6-ff15f9395f74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32ffe9b5be1ad35ddd9febeb1f98d097ff984ae3bd337ebbbe14d99170d8489a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4c5571003912b3a12d9b8e7230f22fd588dae784e943736ea11373f2dcd2baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df646ec52f7a1cf49d9303ebccd8de6422fa94c4907a596b63278216fc07ebcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea318ea1aebaab356fe32d0f52eb66b4bbbfccfd6dc40b7426a345e78f9a77de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 
16:55:52.174253 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.174435 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.174657 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.174826 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.174941 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.182933 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf7945d-7e6c-4b24-854b-268b781347c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8faab81a2b9f03a74368e14568cc8b7b928132eef181ee297d2fbad86f5fb194\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://341439f97d31ae57097f3e181c79c8fcd260be539f13fec078ba1450f2410b59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:54:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:54:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.202324 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.277573 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.277613 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.277639 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.277652 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.277661 5129 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.380064 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.380107 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.380120 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.380137 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.380150 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.482996 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.483048 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.483066 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.483090 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.483103 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.520691 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:52 crc kubenswrapper[5129]: E1211 16:55:52.520787 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.520922 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:52 crc kubenswrapper[5129]: E1211 16:55:52.521124 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.521205 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:52 crc kubenswrapper[5129]: E1211 16:55:52.521325 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.521373 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:52 crc kubenswrapper[5129]: E1211 16:55:52.521461 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.586259 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.586307 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.586322 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.586339 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.586350 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.688304 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.688360 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.688376 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.688397 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.688412 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.791831 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.791876 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.791889 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.791907 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.791920 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.893808 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.893842 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.893852 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.893864 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.893873 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.943099 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" exitCode=0 Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.943165 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.944997 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerStarted","Data":"925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.945048 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerStarted","Data":"ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.950655 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="60ba4db3645e1259dadeba99e407cf389fbdcb4a3ed7eb8a5c5b2577df4d312e" exitCode=0 Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.950728 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"60ba4db3645e1259dadeba99e407cf389fbdcb4a3ed7eb8a5c5b2577df4d312e"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.953341 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"e7d493542a27ce39f047d7d33b09babee5c2a80ae48255909529f246b9fc14bc"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.953370 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"e97e7afe9ef64b5b6f7fc723da62431ae38befc2999d9f578c52074877a6a761"} Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.956031 5129 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:55:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.995364 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.995899 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.995928 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.995961 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:52 crc kubenswrapper[5129]: I1211 16:55:52.995984 5129 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:52Z","lastTransitionTime":"2025-12-11T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.018491 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=15.018465797 podStartE2EDuration="15.018465797s" podCreationTimestamp="2025-12-11 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.0053325 +0000 UTC m=+96.808862537" watchObservedRunningTime="2025-12-11 16:55:53.018465797 +0000 UTC m=+96.821995844" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.077630 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.077604201 podStartE2EDuration="15.077604201s" podCreationTimestamp="2025-12-11 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.075192456 +0000 UTC m=+96.878722483" watchObservedRunningTime="2025-12-11 16:55:53.077604201 +0000 UTC m=+96.881134228" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.098183 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.098234 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.098253 5129 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.098275 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.098292 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.200026 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.200223 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.200310 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.200407 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.200496 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.208820 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-m95zr" podStartSLOduration=77.208803538 podStartE2EDuration="1m17.208803538s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.207792826 +0000 UTC m=+97.011322853" watchObservedRunningTime="2025-12-11 16:55:53.208803538 +0000 UTC m=+97.012333575" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.280061 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=15.280048197 podStartE2EDuration="15.280048197s" podCreationTimestamp="2025-12-11 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.269621083 +0000 UTC m=+97.073151100" watchObservedRunningTime="2025-12-11 16:55:53.280048197 +0000 UTC m=+97.083578214" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.280689 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.280681526 podStartE2EDuration="15.280681526s" podCreationTimestamp="2025-12-11 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.279705147 +0000 UTC m=+97.083235164" watchObservedRunningTime="2025-12-11 16:55:53.280681526 +0000 UTC m=+97.084211543" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.302119 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 
16:55:53.302159 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.302170 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.302187 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.302196 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.317256 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" podStartSLOduration=77.31723314 podStartE2EDuration="1m17.31723314s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:53.315339661 +0000 UTC m=+97.118869678" watchObservedRunningTime="2025-12-11 16:55:53.31723314 +0000 UTC m=+97.120763177" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.404234 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.404269 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.404280 5129 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.404295 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.404305 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.505772 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.505821 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.505838 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.505860 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.505878 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.608022 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.608071 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.608083 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.608099 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.608111 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.710221 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.710254 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.710270 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.710285 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.710296 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.812614 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.812661 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.812675 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.812691 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.812702 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.915346 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.915411 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.915435 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.915467 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.915487 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:53Z","lastTransitionTime":"2025-12-11T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.960089 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="3855e005f45920e3256304548bf6bef920a4b893868aa4293479619e83693adb" exitCode=0 Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.960223 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"3855e005f45920e3256304548bf6bef920a4b893868aa4293479619e83693adb"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966037 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966104 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966125 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966142 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966159 5129 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.966177 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.970374 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"2476ec25f135c337b3d37bf3dab44fb39ec5343138ffa7ec34d0d814e895e71d"} Dec 11 16:55:53 crc kubenswrapper[5129]: I1211 16:55:53.970421 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.024041 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.024080 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.024090 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.024105 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.024114 5129 setters.go:618] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.127228 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.127554 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.127567 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.127581 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.127593 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.229756 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.229802 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.229813 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.229827 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.229836 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.332331 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.332398 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.332418 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.332444 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.332463 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.381656 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.381899 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:56:10.381860738 +0000 UTC m=+114.185390765 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.382130 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.382189 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.382280 5129 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.382325 5129 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.382341 5129 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.382326282 +0000 UTC m=+114.185856299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.382420 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.382408595 +0000 UTC m=+114.185938622 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.435960 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.436031 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.436051 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.436075 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.436093 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.483758 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.483830 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484077 5129 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484168 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs podName:15d52990-0733-45fe-ac96-429a9503dbab nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.48414313 +0000 UTC m=+114.287673177 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs") pod "network-metrics-daemon-fptr2" (UID: "15d52990-0733-45fe-ac96-429a9503dbab") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484198 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484256 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484284 5129 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484350 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484400 5129 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484411 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.484376406 +0000 UTC m=+114.287906463 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484427 5129 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.484566 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.48449488 +0000 UTC m=+114.288024957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.484672 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.532892 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.532905 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.533091 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.533277 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab"
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.533404 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.533576 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.533939 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:55:54 crc kubenswrapper[5129]: E1211 16:55:54.534140 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.538801 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.538841 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.538853 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.538868 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.538882 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.645071 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.645122 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.645135 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.645155 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.645167 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.751474 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.751517 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.751543 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.751595 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.751629 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.854138 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.854395 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.854413 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.854436 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.854456 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.956894 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.956937 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.956948 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.956961 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.956972 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:54Z","lastTransitionTime":"2025-12-11T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.975728 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="ad4381275a635dba7c4b9039fd5411448f61e083d948289c419f792f54d9709c" exitCode=0
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.975798 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"ad4381275a635dba7c4b9039fd5411448f61e083d948289c419f792f54d9709c"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.977070 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-spxfg" event={"ID":"a0974084-197d-495d-b227-4ea7d61426c6","Type":"ContainerStarted","Data":"88958f6b7c442859056da56b8a2f48ece7acfbf8bad6042c206904d33ef34513"}
Dec 11 16:55:54 crc kubenswrapper[5129]: I1211 16:55:54.979736 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"60c4266e8539f30bbf7075e37472763bf1c2a8bd573c6bfabcf7890240bcfe28"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.003099 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podStartSLOduration=79.003081048 podStartE2EDuration="1m19.003081048s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:54.029987949 +0000 UTC m=+97.833517976" watchObservedRunningTime="2025-12-11 16:55:55.003081048 +0000 UTC m=+98.806611075"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.039726 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-spxfg" podStartSLOduration=80.039704204 podStartE2EDuration="1m20.039704204s" podCreationTimestamp="2025-12-11 16:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:55.037921239 +0000 UTC m=+98.841451286" watchObservedRunningTime="2025-12-11 16:55:55.039704204 +0000 UTC m=+98.843234241"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.059041 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.059080 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.059092 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.059108 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.059121 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.160972 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.161009 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.161018 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.161033 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.161042 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.263642 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.263923 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.263935 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.263951 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.263963 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.365909 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.365959 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.365970 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.365987 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.366000 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.469581 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.469616 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.469626 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.469638 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.469647 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.572397 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.572473 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.572502 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.572582 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.572604 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.674970 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.675016 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.675032 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.675054 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.675069 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.776274 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.776317 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.776330 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.776345 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.776357 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.878426 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.878496 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.878555 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.878588 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.878610 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.981022 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.981060 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.981070 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.981083 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.981092 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:55Z","lastTransitionTime":"2025-12-11T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.987922 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="02a3fd744fb61eb954b7331598475211d9290393b5dd7525abe1fc9d4ce72e82" exitCode=0
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.987991 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"02a3fd744fb61eb954b7331598475211d9290393b5dd7525abe1fc9d4ce72e82"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.997943 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"}
Dec 11 16:55:55 crc kubenswrapper[5129]: I1211 16:55:55.999655 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t8chw" event={"ID":"45767eb3-dd9a-4116-a1d6-a0e107c053ac","Type":"ContainerStarted","Data":"f558f35586bcf4faf000f83e730d84a429a640151f945ba927e1579b08264e73"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.039729 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t8chw" podStartSLOduration=80.039697278 podStartE2EDuration="1m20.039697278s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:56.039655537 +0000 UTC m=+99.843185584" watchObservedRunningTime="2025-12-11 16:55:56.039697278 +0000 UTC m=+99.843227335"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.084102 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.084144 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.084156 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.084172 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.084185 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.190745 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.190788 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.190800 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.190817 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.190828 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.295853 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.295892 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.295901 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.295918 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.295928 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.398044 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.398135 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.398162 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.398188 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.398235 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.500419 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.500480 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.500498 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.500541 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.500561 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.523812 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:55:56 crc kubenswrapper[5129]: E1211 16:55:56.523911 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.523918 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.523961 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.524091 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2"
Dec 11 16:55:56 crc kubenswrapper[5129]: E1211 16:55:56.524082 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:55:56 crc kubenswrapper[5129]: E1211 16:55:56.524156 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:55:56 crc kubenswrapper[5129]: E1211 16:55:56.524205 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.603014 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.603070 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.603081 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.603096 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.603121 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.705636 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.705957 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.705968 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.705982 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.705996 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.808366 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.808402 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.808412 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.808426 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.808436 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.910316 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.910361 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.910380 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.910397 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:56 crc kubenswrapper[5129]: I1211 16:55:56.910406 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:56Z","lastTransitionTime":"2025-12-11T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.009111 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="b83e8ced16a7609d3ce4f5ebb58ef53bf8114928d91d042e2a92c8e54d893760" exitCode=0 Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.009207 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"b83e8ced16a7609d3ce4f5ebb58ef53bf8114928d91d042e2a92c8e54d893760"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.012583 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.012638 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.012663 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.012693 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.012718 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.115393 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.115458 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.115483 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.115550 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.115577 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.217022 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.217074 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.217084 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.217099 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.217110 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.318975 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.319062 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.319078 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.319095 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.319704 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.422401 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.422457 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.422478 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.422503 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.422610 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.524713 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.524753 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.524762 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.524774 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.524788 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.626764 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.626812 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.626835 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.626852 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.626863 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.730087 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.730156 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.730169 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.730204 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.730217 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.832561 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.832621 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.832636 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.832655 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.832668 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.934575 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.934606 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.934614 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.934627 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:57 crc kubenswrapper[5129]: I1211 16:55:57.934635 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:57Z","lastTransitionTime":"2025-12-11T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.019327 5129 generic.go:358] "Generic (PLEG): container finished" podID="0e5c4751-c0b7-476b-a553-042ed9d66177" containerID="201a5509670627b2378af706e5fe7760ad624dbc9a9adc62b5f4c6142d7849ab" exitCode=0 Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.019463 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerDied","Data":"201a5509670627b2378af706e5fe7760ad624dbc9a9adc62b5f4c6142d7849ab"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.032935 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerStarted","Data":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.033721 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.033950 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.033966 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.035275 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"0492823ee3160305a7e697ca102e2d9ea761fc46a40c095b24bc505594277abf"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.036347 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.036411 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.036425 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.036440 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.036452 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.068911 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.077273 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.109685 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podStartSLOduration=82.109655116 podStartE2EDuration="1m22.109655116s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:58.094362201 +0000 UTC m=+101.897892248" watchObservedRunningTime="2025-12-11 16:55:58.109655116 +0000 UTC m=+101.913185153" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.138270 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.138322 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.138336 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.138359 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.138371 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.239905 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.240112 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.240136 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.240149 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.240159 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.342866 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.342913 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.342925 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.342945 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.342957 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.445384 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.445424 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.445433 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.445446 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.445456 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.519483 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.519659 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.519703 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.519702 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:55:58 crc kubenswrapper[5129]: E1211 16:55:58.520480 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:55:58 crc kubenswrapper[5129]: E1211 16:55:58.520298 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:55:58 crc kubenswrapper[5129]: E1211 16:55:58.520623 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:55:58 crc kubenswrapper[5129]: E1211 16:55:58.520324 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.547281 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.547325 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.547335 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.547348 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.547358 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.651378 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.651428 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.651446 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.651473 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.651489 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.754434 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.754497 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.754543 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.754563 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.754573 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.856577 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.856617 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.856627 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.856642 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.856651 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.958929 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.958968 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.958980 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.958998 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:58 crc kubenswrapper[5129]: I1211 16:55:58.959010 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:58Z","lastTransitionTime":"2025-12-11T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.043738 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" event={"ID":"0e5c4751-c0b7-476b-a553-042ed9d66177","Type":"ContainerStarted","Data":"0a78dbd9253436feedc08072b237fdc1bd24889052dfb2a3e43a105ea2cae643"} Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.061694 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.061752 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.061771 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.061793 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.061809 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:59Z","lastTransitionTime":"2025-12-11T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.070082 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-sdzh7" podStartSLOduration=83.070062211 podStartE2EDuration="1m23.070062211s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:55:59.068094521 +0000 UTC m=+102.871624578" watchObservedRunningTime="2025-12-11 16:55:59.070062211 +0000 UTC m=+102.873592248" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.163624 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.163701 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.163733 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.163761 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.163782 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:59Z","lastTransitionTime":"2025-12-11T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.205592 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.205639 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.205651 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.205669 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.205683 5129 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:55:59Z","lastTransitionTime":"2025-12-11T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.260690 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6"] Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.439379 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.443973 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.444091 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.444155 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.444360 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.545314 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.545367 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.545403 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.545447 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.545499 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.549389 5129 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.558327 5129 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.646815 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 
16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.646895 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.646966 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.647095 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.647168 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.647309 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-serving-cert\") pod 
\"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.647335 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.648412 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.664338 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.669622 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccd70a63-4d7a-4002-a8cd-ad551aa2aef8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-q7fm6\" (UID: \"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:55:59 crc kubenswrapper[5129]: I1211 16:55:59.767691 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.047471 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" event={"ID":"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8","Type":"ContainerStarted","Data":"bb7c3cd19757772e05c065862e2f13e8383a47cfa833f58e639e1d139a3af7ae"} Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.047534 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" event={"ID":"ccd70a63-4d7a-4002-a8cd-ad551aa2aef8","Type":"ContainerStarted","Data":"070509a497bc4cc4f16c57b9ae3048344a9aa504381816e93d49b3119f476c4c"} Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.324295 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-q7fm6" podStartSLOduration=84.324277767 podStartE2EDuration="1m24.324277767s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:00.062476211 +0000 UTC m=+103.866006228" watchObservedRunningTime="2025-12-11 16:56:00.324277767 +0000 UTC m=+104.127807784" Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.324735 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fptr2"] Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.324921 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:00 crc kubenswrapper[5129]: E1211 16:56:00.325067 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.524819 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.525173 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:00 crc kubenswrapper[5129]: I1211 16:56:00.525233 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:00 crc kubenswrapper[5129]: E1211 16:56:00.525328 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:56:00 crc kubenswrapper[5129]: E1211 16:56:00.525459 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:56:00 crc kubenswrapper[5129]: E1211 16:56:00.525187 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:56:02 crc kubenswrapper[5129]: I1211 16:56:02.520046 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:02 crc kubenswrapper[5129]: I1211 16:56:02.520369 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:02 crc kubenswrapper[5129]: E1211 16:56:02.520469 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:56:02 crc kubenswrapper[5129]: I1211 16:56:02.520637 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:02 crc kubenswrapper[5129]: E1211 16:56:02.520676 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:56:02 crc kubenswrapper[5129]: I1211 16:56:02.520723 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:02 crc kubenswrapper[5129]: E1211 16:56:02.520700 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:56:02 crc kubenswrapper[5129]: I1211 16:56:02.520738 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" Dec 11 16:56:02 crc kubenswrapper[5129]: E1211 16:56:02.520767 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.066910 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.070094 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9"} Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.070818 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.102617 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.102587241 podStartE2EDuration="26.102587241s" podCreationTimestamp="2025-12-11 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:04.099273928 +0000 UTC m=+107.902803965" watchObservedRunningTime="2025-12-11 16:56:04.102587241 +0000 UTC m=+107.906117308" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.519932 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.519963 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:04 crc kubenswrapper[5129]: E1211 16:56:04.520035 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fptr2" podUID="15d52990-0733-45fe-ac96-429a9503dbab" Dec 11 16:56:04 crc kubenswrapper[5129]: E1211 16:56:04.520096 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.520178 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:04 crc kubenswrapper[5129]: I1211 16:56:04.520207 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:04 crc kubenswrapper[5129]: E1211 16:56:04.520350 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:56:04 crc kubenswrapper[5129]: E1211 16:56:04.520462 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.731410 5129 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.732024 5129 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.775091 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cvmcg"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.788021 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-766lv"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.788392 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.792456 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.792912 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.794393 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.794434 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.794698 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.795024 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.795285 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.798877 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.799328 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.800705 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.805152 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-nmqql"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.808929 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.809002 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.813163 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-jhm42"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.813314 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-nmqql"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.816993 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.817891 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.818797 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.821701 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-np8vg"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.822993 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.823110 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.835909 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5lsw5"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.840565 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.840716 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.840759 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.857062 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.857710 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.871690 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.872717 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.879712 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.879748 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.880077 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.881529 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.881681 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.884916 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.885225 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.885381 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.888171 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cvmcg"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.888213 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.891055 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.891387 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.899194 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.899696 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.899863 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.902964 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.917354 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.917494 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.917665 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.917788 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.917799 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.920397 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.924736 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.924979 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.925287 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.925541 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.925919 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.925968 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c14a3d-971c-4b59-8898-6eca22abae48-serving-cert\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926014 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926038 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bdm\" (UniqueName: \"kubernetes.io/projected/c4c14a3d-971c-4b59-8898-6eca22abae48-kube-api-access-f6bdm\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926061 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926063 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995ea4e8-ce15-451c-b499-8fb323605af8-serving-cert\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926071 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926085 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759jr\" (UniqueName: \"kubernetes.io/projected/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-kube-api-access-759jr\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926107 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-client\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926144 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926173 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-config\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926206 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-console-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926224 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-serving-cert\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926236 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926287 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"]
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926248 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926348 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926364 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-oauth-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926394 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-image-import-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926409 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-trusted-ca\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926429 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926444 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a34c1a2-a365-45a2-85bf-6946a16dcb01-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926464 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926478 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-encryption-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926496 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfrn\" (UniqueName: \"kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926610 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926648 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhcv\" (UniqueName: \"kubernetes.io/projected/46c1607c-6eed-4090-b499-6751db7a0e69-kube-api-access-dnhcv\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926684 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-node-pullsecrets\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926723 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926742 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926756 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-config\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926781 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-audit\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926797 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-service-ca\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926815 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926836 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926851 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926866 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926882 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926897 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snhb\" (UniqueName: \"kubernetes.io/projected/995ea4e8-ce15-451c-b499-8fb323605af8-kube-api-access-8snhb\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926919 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l44cs\" (UniqueName: \"kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926934 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926950 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926965 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/995ea4e8-ce15-451c-b499-8fb323605af8-available-featuregates\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.926985 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927001 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927018 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927039 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927056 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9x5\" (UniqueName: \"kubernetes.io/projected/83730945-5deb-4b14-988b-24d05e851543-kube-api-access-7n9x5\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927072 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-oauth-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927087 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgqr\" (UniqueName: \"kubernetes.io/projected/ed3c1960-3512-45f3-ba99-e79179060051-kube-api-access-scgqr\") pod \"downloads-747b44746d-nmqql\" (UID: \"ed3c1960-3512-45f3-ba99-e79179060051\") " pod="openshift-console/downloads-747b44746d-nmqql"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927101 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927115 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-config\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927132 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927147 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-audit-dir\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927170 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927188 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296910c-dd32-4d25-b939-00b76a3fe0a5-serving-cert\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927202 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927217 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-trusted-ca-bundle\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927241 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-images\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927255 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7flh\" (UniqueName: \"kubernetes.io/projected/2a34c1a2-a365-45a2-85bf-6946a16dcb01-kube-api-access-r7flh\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.927269 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkh7\" (UniqueName: \"kubernetes.io/projected/3296910c-dd32-4d25-b939-00b76a3fe0a5-kube-api-access-6jkh7\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.930150 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.930603 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.930797 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.930989 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.931219 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.931878 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.931906 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932058 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932093 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932210 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932218 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932423 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932431 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.932977 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933016 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933261 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933392 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933558 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933706 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.933981 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.934098 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.934252 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.938911 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939214 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939277 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939371 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939462 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939550 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939683 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939747 5129 reflector.go:430] "Caches populated"
type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939796 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939843 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939895 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.939934 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940094 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940161 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940229 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940329 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940432 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 
16:56:05.940526 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940584 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940618 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940712 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940780 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940792 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940854 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940904 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940963 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.940983 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941047 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941054 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941127 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941205 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941310 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941382 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941467 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941565 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941620 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941710 5129 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941726 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941790 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941857 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.941929 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.942102 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.942854 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.943038 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.944108 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.946366 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 
16:56:05.946751 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.948426 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.948675 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.954611 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"] Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.955084 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.955202 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.955406 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.955776 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.958134 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.960448 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.985842 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.995612 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 11 16:56:05 crc kubenswrapper[5129]: I1211 16:56:05.996269 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:05.978156 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.003282 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-nmqql"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.003314 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-jhm42"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.003325 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.003346 5129 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-xcrfz"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.006027 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.012758 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.013931 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.015281 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.015427 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.015659 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.015826 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.017484 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.017974 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 
16:56:06.018496 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.021149 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.021302 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.021883 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.023176 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.024269 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.024420 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027156 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027426 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027819 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027870 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9x5\" (UniqueName: \"kubernetes.io/projected/83730945-5deb-4b14-988b-24d05e851543-kube-api-access-7n9x5\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027894 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-oauth-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027912 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scgqr\" (UniqueName: \"kubernetes.io/projected/ed3c1960-3512-45f3-ba99-e79179060051-kube-api-access-scgqr\") pod \"downloads-747b44746d-nmqql\" (UID: \"ed3c1960-3512-45f3-ba99-e79179060051\") " pod="openshift-console/downloads-747b44746d-nmqql" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027929 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.027967 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-config\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028126 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-serving-cert\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028155 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028177 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-audit-dir\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028196 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028227 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028258 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296910c-dd32-4d25-b939-00b76a3fe0a5-serving-cert\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028279 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028297 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-trusted-ca-bundle\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " 
pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028313 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-images\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028328 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7flh\" (UniqueName: \"kubernetes.io/projected/2a34c1a2-a365-45a2-85bf-6946a16dcb01-kube-api-access-r7flh\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028343 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkh7\" (UniqueName: \"kubernetes.io/projected/3296910c-dd32-4d25-b939-00b76a3fe0a5-kube-api-access-6jkh7\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028370 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c14a3d-971c-4b59-8898-6eca22abae48-serving-cert\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028397 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028413 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6bdm\" (UniqueName: \"kubernetes.io/projected/c4c14a3d-971c-4b59-8898-6eca22abae48-kube-api-access-f6bdm\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028428 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995ea4e8-ce15-451c-b499-8fb323605af8-serving-cert\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028447 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-serving-ca\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028462 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-759jr\" (UniqueName: \"kubernetes.io/projected/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-kube-api-access-759jr\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028644 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.028986 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-config\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029169 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-images\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029257 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-client\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029283 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") 
" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029311 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-encryption-config\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029329 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029345 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-config\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029400 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-client\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029417 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72v6\" (UniqueName: 
\"kubernetes.io/projected/a4a1ad2a-71de-426e-a205-d2cf008a150b-kube-api-access-m72v6\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029442 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-console-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029457 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-serving-cert\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029473 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxht\" (UniqueName: \"kubernetes.io/projected/fff5aa2a-7859-43b5-9a93-a567567a9270-kube-api-access-mdxht\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029495 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 
16:56:06.029527 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029548 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-oauth-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029569 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-dir\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029587 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029604 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029623 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-image-import-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029640 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-trusted-ca\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029766 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mtq\" (UniqueName: \"kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029788 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029807 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2a34c1a2-a365-45a2-85bf-6946a16dcb01-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029825 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff5aa2a-7859-43b5-9a93-a567567a9270-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029849 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029897 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-encryption-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029929 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029947 5129 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfrn\" (UniqueName: \"kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029962 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029971 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-trusted-ca-bundle\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.029988 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030029 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhcv\" (UniqueName: \"kubernetes.io/projected/46c1607c-6eed-4090-b499-6751db7a0e69-kube-api-access-dnhcv\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030129 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-node-pullsecrets\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030162 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030183 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030206 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-config\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030229 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030258 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-audit\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030279 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a4a1ad2a-71de-426e-a205-d2cf008a150b-metrics-tls\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030299 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-service-ca\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030319 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030346 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030365 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-policies\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030384 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/4bc8ddee-33ed-4b56-a439-7ba8e704624b-kube-api-access-rnhll\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030411 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff5aa2a-7859-43b5-9a93-a567567a9270-config\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030451 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030472 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030493 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030533 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030557 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8snhb\" (UniqueName: \"kubernetes.io/projected/995ea4e8-ce15-451c-b499-8fb323605af8-kube-api-access-8snhb\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030578 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mq4g\" (UniqueName: 
\"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-kube-api-access-2mq4g\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030611 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l44cs\" (UniqueName: \"kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030632 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030655 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030673 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/995ea4e8-ce15-451c-b499-8fb323605af8-available-featuregates\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030692 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030723 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030745 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a1ad2a-71de-426e-a205-d2cf008a150b-tmp-dir\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030771 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.030797 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-service-ca-bundle\") 
pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.031470 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.032128 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-audit-dir\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.032961 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-audit\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.033199 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.033558 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.033689 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46c1607c-6eed-4090-b499-6751db7a0e69-node-pullsecrets\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.033777 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.034437 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a34c1a2-a365-45a2-85bf-6946a16dcb01-config\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.034571 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"] Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.034908 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.035259 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/995ea4e8-ce15-451c-b499-8fb323605af8-available-featuregates\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.036075 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.036551 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-config\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.036790 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.037963 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-service-ca\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.038047 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-console-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.041748 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042186 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4c14a3d-971c-4b59-8898-6eca22abae48-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042256 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83730945-5deb-4b14-988b-24d05e851543-oauth-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042301 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-image-import-ca\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042488 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042808 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-encryption-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.042991 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.043473 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3296910c-dd32-4d25-b939-00b76a3fe0a5-trusted-ca\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.043993 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044194 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044199 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044491 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044836 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-etcd-client\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044892 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.044911 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045078 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296910c-dd32-4d25-b939-00b76a3fe0a5-serving-cert\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045114 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045423 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045409 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c14a3d-971c-4b59-8898-6eca22abae48-serving-cert\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045464 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.045717 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c1607c-6eed-4090-b499-6751db7a0e69-config\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.047827 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.047957 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-oauth-config\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.050336 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hxxwl"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.051100 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.051157 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83730945-5deb-4b14-988b-24d05e851543-console-serving-cert\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.051444 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.051493 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/995ea4e8-ce15-451c-b499-8fb323605af8-serving-cert\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.051798 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.055770 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.055814 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.056007 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.056155 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a34c1a2-a365-45a2-85bf-6946a16dcb01-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.058785 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c1607c-6eed-4090-b499-6751db7a0e69-serving-cert\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.060035 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.060163 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.064473 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.064614 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.069270 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.069298 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.069312 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.069461 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.070133 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.072705 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.072808 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.072822 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.072828 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.072924 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.075940 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.075964 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.076084 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.078602 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-np8vg"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.078623 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.079154 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081437 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081455 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-766lv"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081464 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081473 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081483 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5lsw5"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081492 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.081631 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.084744 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xjqrz"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.084868 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.087769 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-98qvs"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.087856 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.090976 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.091025 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.091036 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.091048 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-964vr"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.091187 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097561 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hxxwl"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097618 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097648 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097673 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097733 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097791 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097829 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097940 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097966 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097976 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.097987 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-92n27"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.106923 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xrzh8"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.107243 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-92n27"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.110284 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.112176 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.112294 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.112347 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.112373 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.112407 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l89p6"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.115857 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.115883 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.115894 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.115904 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xjqrz"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.115984 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l89p6"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.116852 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xrzh8"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.118091 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-964vr"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.119301 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-92n27"]
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.130733 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131411 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131500 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a1ad2a-71de-426e-a205-d2cf008a150b-tmp-dir\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131564 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-serving-cert\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131591 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131638 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-serving-ca\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131661 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131696 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-encryption-config\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131719 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-client\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131743 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m72v6\" (UniqueName: \"kubernetes.io/projected/a4a1ad2a-71de-426e-a205-d2cf008a150b-kube-api-access-m72v6\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131779 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxht\" (UniqueName: \"kubernetes.io/projected/fff5aa2a-7859-43b5-9a93-a567567a9270-kube-api-access-mdxht\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131808 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-dir\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131833 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131875 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131914 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.131980 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-dir\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132040 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72mtq\" (UniqueName: \"kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132080 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff5aa2a-7859-43b5-9a93-a567567a9270-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132195 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132536 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4a1ad2a-71de-426e-a205-d2cf008a150b-tmp-dir\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132704 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-serving-ca\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132889 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.133489 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.132220 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.133623 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.133647 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a4a1ad2a-71de-426e-a205-d2cf008a150b-metrics-tls\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.133731 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134116 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134130 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134197 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-policies\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134229 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/4bc8ddee-33ed-4b56-a439-7ba8e704624b-kube-api-access-rnhll\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134253 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff5aa2a-7859-43b5-9a93-a567567a9270-config\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134294 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2mq4g\" (UniqueName: \"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-kube-api-access-2mq4g\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134807 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bc8ddee-33ed-4b56-a439-7ba8e704624b-audit-policies\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.134869 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff5aa2a-7859-43b5-9a93-a567567a9270-config\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.135109 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff5aa2a-7859-43b5-9a93-a567567a9270-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.135110 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.135609 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-etcd-client\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.136252 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-encryption-config\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.136463 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc8ddee-33ed-4b56-a439-7ba8e704624b-serving-cert\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.137534 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a4a1ad2a-71de-426e-a205-d2cf008a150b-metrics-tls\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.156812 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.164325 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.170616 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 11 16:56:06 crc kubenswrapper[5129]: I1211
16:56:06.190062 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.196039 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.210724 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.230693 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.250782 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.270496 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.290598 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.310365 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.350833 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.371757 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.390965 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.411228 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.431620 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.450694 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.471631 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.490585 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.510915 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.520675 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.520713 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.520781 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.520787 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.535144 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.551646 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.571859 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.590823 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.611146 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.629948 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.651418 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.670857 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.690801 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.710656 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.731662 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.751313 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.771240 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.790277 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.811538 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.831584 5129 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.851210 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.891242 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgqr\" (UniqueName: \"kubernetes.io/projected/ed3c1960-3512-45f3-ba99-e79179060051-kube-api-access-scgqr\") pod \"downloads-747b44746d-nmqql\" (UID: \"ed3c1960-3512-45f3-ba99-e79179060051\") " pod="openshift-console/downloads-747b44746d-nmqql" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.907539 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9x5\" (UniqueName: \"kubernetes.io/projected/83730945-5deb-4b14-988b-24d05e851543-kube-api-access-7n9x5\") pod \"console-64d44f6ddf-jhm42\" (UID: \"83730945-5deb-4b14-988b-24d05e851543\") " pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.924741 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7flh\" (UniqueName: \"kubernetes.io/projected/2a34c1a2-a365-45a2-85bf-6946a16dcb01-kube-api-access-r7flh\") pod \"machine-api-operator-755bb95488-cvmcg\" (UID: \"2a34c1a2-a365-45a2-85bf-6946a16dcb01\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.945321 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkh7\" (UniqueName: \"kubernetes.io/projected/3296910c-dd32-4d25-b939-00b76a3fe0a5-kube-api-access-6jkh7\") pod \"console-operator-67c89758df-5lsw5\" (UID: \"3296910c-dd32-4d25-b939-00b76a3fe0a5\") " pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 
16:56:06.965049 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6bdm\" (UniqueName: \"kubernetes.io/projected/c4c14a3d-971c-4b59-8898-6eca22abae48-kube-api-access-f6bdm\") pod \"authentication-operator-7f5c659b84-nlzgz\" (UID: \"c4c14a3d-971c-4b59-8898-6eca22abae48\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:06 crc kubenswrapper[5129]: I1211 16:56:06.986013 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-759jr\" (UniqueName: \"kubernetes.io/projected/d7ed60a5-b258-460e-9fc3-2461aaa4cf12-kube-api-access-759jr\") pod \"cluster-samples-operator-6b564684c8-sz8rd\" (UID: \"d7ed60a5-b258-460e-9fc3-2461aaa4cf12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.007858 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l44cs\" (UniqueName: \"kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs\") pod \"route-controller-manager-776cdc94d6-jfgtl\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.014141 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.026465 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhcv\" (UniqueName: \"kubernetes.io/projected/46c1607c-6eed-4090-b499-6751db7a0e69-kube-api-access-dnhcv\") pod \"apiserver-9ddfb9f55-np8vg\" (UID: \"46c1607c-6eed-4090-b499-6751db7a0e69\") " pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.049008 5129 request.go:752] "Waited before sending request" delay="1.010456359s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.051876 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snhb\" (UniqueName: \"kubernetes.io/projected/995ea4e8-ce15-451c-b499-8fb323605af8-kube-api-access-8snhb\") pod \"openshift-config-operator-5777786469-766lv\" (UID: \"995ea4e8-ce15-451c-b499-8fb323605af8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.056798 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.070042 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.070438 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gfrn\" (UniqueName: \"kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn\") pod \"oauth-openshift-66458b6674-glzzm\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") " pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.078767 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.092213 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.093661 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.110665 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.118311 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-nmqql" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.181860 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.182789 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.183883 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.184765 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.184925 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.185404 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.190195 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.211690 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.212254 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.225194 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.230751 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.254897 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.273828 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.291844 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.311349 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.326028 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cvmcg"] Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.341848 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.345116 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-766lv"] Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.351293 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.372345 5129 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.394033 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.411039 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.431894 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.456256 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.471637 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.491012 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.499386 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-nmqql"] Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.505560 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3c1960_3512_45f3_ba99_e79179060051.slice/crio-3d87c60a4137791d8094a80117c4542f57009aff29a7d90990a42d45c57a9a67 WatchSource:0}: Error finding container 3d87c60a4137791d8094a80117c4542f57009aff29a7d90990a42d45c57a9a67: 
Status 404 returned error can't find the container with id 3d87c60a4137791d8094a80117c4542f57009aff29a7d90990a42d45c57a9a67 Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.510807 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.530387 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.545495 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-np8vg"] Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.553195 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c1607c_6eed_4090_b499_6751db7a0e69.slice/crio-16371a056557a0f67dd09b3301edf3ce6c7efac18a0df13ad6c02c24ade39099 WatchSource:0}: Error finding container 16371a056557a0f67dd09b3301edf3ce6c7efac18a0df13ad6c02c24ade39099: Status 404 returned error can't find the container with id 16371a056557a0f67dd09b3301edf3ce6c7efac18a0df13ad6c02c24ade39099 Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.554616 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.571824 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.591327 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.593672 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"] Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.596866 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94785b3a_cb7c_426f_ab27_b74c298f40f2.slice/crio-3992e041d562bb7e18ffe83af590bd174a44ceec6acb7df7202e0753f5fad166 WatchSource:0}: Error finding container 3992e041d562bb7e18ffe83af590bd174a44ceec6acb7df7202e0753f5fad166: Status 404 returned error can't find the container with id 3992e041d562bb7e18ffe83af590bd174a44ceec6acb7df7202e0753f5fad166 Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.610597 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.631026 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.651375 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.676475 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.691023 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.707344 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz"] Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.710684 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.711082 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"] Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.711717 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-jhm42"] Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.717959 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83730945_5deb_4b14_988b_24d05e851543.slice/crio-1093d3623e426d6639a07d2689c271a335879524f32233e1b54f710bdfba03a9 WatchSource:0}: Error finding container 1093d3623e426d6639a07d2689c271a335879524f32233e1b54f710bdfba03a9: Status 404 returned error can't find the container with id 1093d3623e426d6639a07d2689c271a335879524f32233e1b54f710bdfba03a9 Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.718968 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40ca3ab4_d0e2_45dd_896c_d688cfc10b10.slice/crio-30224e3ba4cf648f9135dc8048922a5d4d6bc4f601ad6f5c37ca59b58314197b WatchSource:0}: Error finding container 30224e3ba4cf648f9135dc8048922a5d4d6bc4f601ad6f5c37ca59b58314197b: Status 404 returned error can't find the container with id 30224e3ba4cf648f9135dc8048922a5d4d6bc4f601ad6f5c37ca59b58314197b Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.720662 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c14a3d_971c_4b59_8898_6eca22abae48.slice/crio-f8dc2b9a80c96e7dd86b960b0f83ccf9369a8822a34f2b95f338e8b2da20da12 WatchSource:0}: Error finding container f8dc2b9a80c96e7dd86b960b0f83ccf9369a8822a34f2b95f338e8b2da20da12: Status 404 returned error can't find the container with id 
f8dc2b9a80c96e7dd86b960b0f83ccf9369a8822a34f2b95f338e8b2da20da12 Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.730782 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.750143 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.752701 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd"] Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.753822 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5lsw5"] Dec 11 16:56:07 crc kubenswrapper[5129]: W1211 16:56:07.759693 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3296910c_dd32_4d25_b939_00b76a3fe0a5.slice/crio-c23778d01c7ac133bf92cee8a930a9d17a64e77a10a9ba00da021475567c0267 WatchSource:0}: Error finding container c23778d01c7ac133bf92cee8a930a9d17a64e77a10a9ba00da021475567c0267: Status 404 returned error can't find the container with id c23778d01c7ac133bf92cee8a930a9d17a64e77a10a9ba00da021475567c0267 Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.790888 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.810414 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.831141 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 11 16:56:07 crc kubenswrapper[5129]: 
I1211 16:56:07.851211 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.871137 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.900873 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.910191 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.930394 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.950796 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.972098 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 11 16:56:07 crc kubenswrapper[5129]: I1211 16:56:07.991362 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.011162 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.033526 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.050343 5129 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.068746 5129 request.go:752] "Waited before sending request" delay="1.952564028s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.070399 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.090561 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" event={"ID":"d7ed60a5-b258-460e-9fc3-2461aaa4cf12","Type":"ContainerStarted","Data":"bf2c7b8ae283a2dc94c8369dd44c67140b2f430d194f9fe7d3aa39d63c73f946"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.090600 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" event={"ID":"d7ed60a5-b258-460e-9fc3-2461aaa4cf12","Type":"ContainerStarted","Data":"83db91592a8631eb71cd8491f4e280ecf65a9d2181a2a582ab32427f43c9db98"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.094232 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" event={"ID":"40ca3ab4-d0e2-45dd-896c-d688cfc10b10","Type":"ContainerStarted","Data":"7192e168ee4bf907a85a42399f2fd8be30b89bc4eb15cf74b2861f656c2896db"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.094468 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" 
event={"ID":"40ca3ab4-d0e2-45dd-896c-d688cfc10b10","Type":"ContainerStarted","Data":"30224e3ba4cf648f9135dc8048922a5d4d6bc4f601ad6f5c37ca59b58314197b"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.095117 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.101002 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" event={"ID":"94785b3a-cb7c-426f-ab27-b74c298f40f2","Type":"ContainerStarted","Data":"7c3d6ffc49391070c1e2f5c389a49688d9ef021c6aba9f7fd9a1ff87f2a816c3"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.101034 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" event={"ID":"94785b3a-cb7c-426f-ab27-b74c298f40f2","Type":"ContainerStarted","Data":"3992e041d562bb7e18ffe83af590bd174a44ceec6acb7df7202e0753f5fad166"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.102202 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.103488 5129 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-glzzm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.103563 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: 
connect: connection refused" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.105242 5129 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-jfgtl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.105548 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.106901 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-jhm42" event={"ID":"83730945-5deb-4b14-988b-24d05e851543","Type":"ContainerStarted","Data":"84e95f56ddcd33d979b0fb921445e90dbdfa062c8fd54274aa075ecd157d36f7"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.106936 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-jhm42" event={"ID":"83730945-5deb-4b14-988b-24d05e851543","Type":"ContainerStarted","Data":"1093d3623e426d6639a07d2689c271a335879524f32233e1b54f710bdfba03a9"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.113844 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" event={"ID":"c4c14a3d-971c-4b59-8898-6eca22abae48","Type":"ContainerStarted","Data":"2250820750a3772657729106caaa2d8959e0ad63827bbd9cec6f44af6b370018"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.113888 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" event={"ID":"c4c14a3d-971c-4b59-8898-6eca22abae48","Type":"ContainerStarted","Data":"f8dc2b9a80c96e7dd86b960b0f83ccf9369a8822a34f2b95f338e8b2da20da12"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.116300 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxht\" (UniqueName: \"kubernetes.io/projected/fff5aa2a-7859-43b5-9a93-a567567a9270-kube-api-access-mdxht\") pod \"openshift-apiserver-operator-846cbfc458-g94wz\" (UID: \"fff5aa2a-7859-43b5-9a93-a567567a9270\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.118230 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" event={"ID":"3296910c-dd32-4d25-b939-00b76a3fe0a5","Type":"ContainerStarted","Data":"c7506815cf0eb2c1b482a8d974df2aae34e8e6f2bd5c5ccddc80d8122a62a9d7"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.118269 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" event={"ID":"3296910c-dd32-4d25-b939-00b76a3fe0a5","Type":"ContainerStarted","Data":"c23778d01c7ac133bf92cee8a930a9d17a64e77a10a9ba00da021475567c0267"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.118708 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.119837 5129 patch_prober.go:28] interesting pod/console-operator-67c89758df-5lsw5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.119876 5129 prober.go:120] 
"Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" podUID="3296910c-dd32-4d25-b939-00b76a3fe0a5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.120684 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-nmqql" event={"ID":"ed3c1960-3512-45f3-ba99-e79179060051","Type":"ContainerStarted","Data":"e4b9c436d4c031d40fa081cd1198261585db241103f8f33dbd5a90100a5c5630"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.120710 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-nmqql" event={"ID":"ed3c1960-3512-45f3-ba99-e79179060051","Type":"ContainerStarted","Data":"3d87c60a4137791d8094a80117c4542f57009aff29a7d90990a42d45c57a9a67"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.121388 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-nmqql" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.125698 5129 patch_prober.go:28] interesting pod/downloads-747b44746d-nmqql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.125753 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-nmqql" podUID="ed3c1960-3512-45f3-ba99-e79179060051" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.129250 5129 generic.go:358] "Generic (PLEG): container finished" 
podID="995ea4e8-ce15-451c-b499-8fb323605af8" containerID="09b8e25edee524188e7173d6f05e52d7e573fd15b2da2b0fb55a3de28bd5a439" exitCode=0 Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.129651 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72v6\" (UniqueName: \"kubernetes.io/projected/a4a1ad2a-71de-426e-a205-d2cf008a150b-kube-api-access-m72v6\") pod \"dns-operator-799b87ffcd-kjgpd\" (UID: \"a4a1ad2a-71de-426e-a205-d2cf008a150b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.129712 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" event={"ID":"995ea4e8-ce15-451c-b499-8fb323605af8","Type":"ContainerDied","Data":"09b8e25edee524188e7173d6f05e52d7e573fd15b2da2b0fb55a3de28bd5a439"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.129750 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" event={"ID":"995ea4e8-ce15-451c-b499-8fb323605af8","Type":"ContainerStarted","Data":"e3af3f5b5422eb5db90c99866ce9d03951e37d96debaa2f33300e7cbb2be39f4"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.132644 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.143116 5129 generic.go:358] "Generic (PLEG): container finished" podID="46c1607c-6eed-4090-b499-6751db7a0e69" containerID="ddfd758ad25caaaca96437f0059e102bde3ff73143d228f292804e80fe9b28f9" exitCode=0 Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.143308 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" event={"ID":"46c1607c-6eed-4090-b499-6751db7a0e69","Type":"ContainerDied","Data":"ddfd758ad25caaaca96437f0059e102bde3ff73143d228f292804e80fe9b28f9"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.143351 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" event={"ID":"46c1607c-6eed-4090-b499-6751db7a0e69","Type":"ContainerStarted","Data":"16371a056557a0f67dd09b3301edf3ce6c7efac18a0df13ad6c02c24ade39099"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.147797 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" event={"ID":"2a34c1a2-a365-45a2-85bf-6946a16dcb01","Type":"ContainerStarted","Data":"bce9ac9976de455ec05f88c0ae1f761633d16a56f455c7fc077a26498dad95e2"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.147838 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" event={"ID":"2a34c1a2-a365-45a2-85bf-6946a16dcb01","Type":"ContainerStarted","Data":"fd9d1d88fe6faa4d988fb75144d6a3046b3df15c340ba21de1152c105c74c2b6"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.147847 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" 
event={"ID":"2a34c1a2-a365-45a2-85bf-6946a16dcb01","Type":"ContainerStarted","Data":"2007731c11f4e982597ec1dcc490c570e3bdf0972a7ca84b9c8817f3267be5dd"} Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.148223 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72mtq\" (UniqueName: \"kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq\") pod \"controller-manager-65b6cccf98-fmrcf\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.177321 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.187235 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhll\" (UniqueName: \"kubernetes.io/projected/4bc8ddee-33ed-4b56-a439-7ba8e704624b-kube-api-access-rnhll\") pod \"apiserver-8596bd845d-xhkdc\" (UID: \"4bc8ddee-33ed-4b56-a439-7ba8e704624b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.191699 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.211505 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mq4g\" (UniqueName: \"kubernetes.io/projected/6d2ac0a1-c1ee-481f-ba8a-498974954c9b-kube-api-access-2mq4g\") pod \"ingress-operator-6b9cb4dbcf-5pj2z\" (UID: \"6d2ac0a1-c1ee-481f-ba8a-498974954c9b\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.220253 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.231350 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.251013 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.260049 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.267489 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.271043 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.297636 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298014 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298596 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvwr\" (UniqueName: \"kubernetes.io/projected/54f92bb1-cf65-4842-a52e-72685ca2be23-kube-api-access-6qvwr\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298639 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/405a34ec-9d37-40b4-842b-7a5e0cc8342b-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298664 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f368f463-ed5c-4b90-bb12-82794199158b-config\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298685 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f368f463-ed5c-4b90-bb12-82794199158b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298738 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298763 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qnk7\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-kube-api-access-6qnk7\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.298826 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/710e0795-32cf-4429-96de-01508f08690d-machine-approver-tls\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.299871 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.299907 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b297ba-c3c6-4b59-891a-1648996d8fd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300695 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b297ba-c3c6-4b59-891a-1648996d8fd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300723 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54f92bb1-cf65-4842-a52e-72685ca2be23-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300797 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jpn\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300906 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300929 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-auth-proxy-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300949 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f92bb1-cf65-4842-a52e-72685ca2be23-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.300983 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjq7\" (UniqueName: \"kubernetes.io/projected/652b8717-fc5d-4c51-bcb9-286947184f64-kube-api-access-7cjq7\") pod \"migrator-866fcbc849-2rsfc\" (UID: \"652b8717-fc5d-4c51-bcb9-286947184f64\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301039 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301058 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301077 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301175 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301223 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbxkp\" (UniqueName: \"kubernetes.io/projected/710e0795-32cf-4429-96de-01508f08690d-kube-api-access-xbxkp\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301251 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/405a34ec-9d37-40b4-842b-7a5e0cc8342b-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301285 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/405a34ec-9d37-40b4-842b-7a5e0cc8342b-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301309 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301337 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301357 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f368f463-ed5c-4b90-bb12-82794199158b-serving-cert\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301410 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301458 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f92bb1-cf65-4842-a52e-72685ca2be23-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301748 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f368f463-ed5c-4b90-bb12-82794199158b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.301895 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405a34ec-9d37-40b4-842b-7a5e0cc8342b-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"
Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.310740 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:08.810723822 +0000 UTC m=+112.614253839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.312337 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.334600 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404014 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404395 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/102d30b5-bb95-4477-a08f-93f2ba3259f9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404446 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b297ba-c3c6-4b59-891a-1648996d8fd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404472 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-tmpfs\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404490 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-apiservice-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404543 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-key\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404566 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-service-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404588 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llf24\" (UniqueName: \"kubernetes.io/projected/979c1fa1-11b6-41df-9502-2384433fe142-kube-api-access-llf24\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404619 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-srv-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404640 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799084e4-c663-46fd-b6d2-ce5de36e3bc6-service-ca-bundle\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404663 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b699a9a-5674-4fb3-a4af-aad938990365-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404680 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/102d30b5-bb95-4477-a08f-93f2ba3259f9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404732 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-auth-proxy-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404757 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f92bb1-cf65-4842-a52e-72685ca2be23-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404777 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x8s8\" (UniqueName: \"kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404797 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-tmp-dir\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404839 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-images\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404863 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2bf\" (UniqueName: \"kubernetes.io/projected/0c5fd5f0-66e1-44f4-bfb0-093595462a64-kube-api-access-dz2bf\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404905 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404963 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.404993 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xbxkp\" (UniqueName: \"kubernetes.io/projected/710e0795-32cf-4429-96de-01508f08690d-kube-api-access-xbxkp\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405017 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6ct\" (UniqueName: \"kubernetes.io/projected/230d02ac-28f0-4758-91cc-577a7c62dece-kube-api-access-lv6ct\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405068 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405091 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b86b2c85-dc72-4796-b7be-553b02ee6b3c-webhook-certs\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405133 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405173 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405223 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405263 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d64c96-60a7-477c-b039-e4201dd39ea7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405300 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-tmpfs\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405321 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405364 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405a34ec-9d37-40b4-842b-7a5e0cc8342b-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405492 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55jr\" (UniqueName: \"kubernetes.io/projected/5138f051-1943-428e-a338-8a01376e467f-kube-api-access-s55jr\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405558 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f368f463-ed5c-4b90-bb12-82794199158b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405585 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/979c1fa1-11b6-41df-9502-2384433fe142-cert\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405603 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405623 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5fd5f0-66e1-44f4-bfb0-093595462a64-config\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405675 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvwr\" (UniqueName: \"kubernetes.io/projected/54f92bb1-cf65-4842-a52e-72685ca2be23-kube-api-access-6qvwr\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405695 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405714 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f368f463-ed5c-4b90-bb12-82794199158b-serving-cert\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405747 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/405a34ec-9d37-40b4-842b-7a5e0cc8342b-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405780 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-config\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405795 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5fd5f0-66e1-44f4-bfb0-093595462a64-serving-cert\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405831 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxbj\" (UniqueName: \"kubernetes.io/projected/9defd256-bb45-4406-9160-111816ac3c7c-kube-api-access-gzxbj\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405849 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-socket-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405865 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-webhook-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405881 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4dc\" (UniqueName: \"kubernetes.io/projected/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-kube-api-access-pp4dc\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405917 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qnk7\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-kube-api-access-6qnk7\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.405938 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b699a9a-5674-4fb3-a4af-aad938990365-config\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.406018 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:08.906000386 +0000 UTC m=+112.709530403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406541 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-registration-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406600 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406650 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406674 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b297ba-c3c6-4b59-891a-1648996d8fd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406699 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54f92bb1-cf65-4842-a52e-72685ca2be23-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406717 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9defd256-bb45-4406-9160-111816ac3c7c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406735 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxvh\" (UniqueName: \"kubernetes.io/projected/8b9ab221-0b40-4bd9-adcb-550aac9fd590-kube-api-access-pfxvh\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406751 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406768 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-mountpoint-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406802 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jpn\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406822 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff29n\" (UniqueName: \"kubernetes.io/projected/102d30b5-bb95-4477-a08f-93f2ba3259f9-kube-api-access-ff29n\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406864 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-serving-cert\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406879 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406896 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-config-volume\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406911 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-metrics-tls\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406944 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjq7\" (UniqueName: \"kubernetes.io/projected/652b8717-fc5d-4c51-bcb9-286947184f64-kube-api-access-7cjq7\") pod \"migrator-866fcbc849-2rsfc\" (UID: \"652b8717-fc5d-4c51-bcb9-286947184f64\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406972 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.406989 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqtqs\" (UniqueName: \"kubernetes.io/projected/f7d64c96-60a7-477c-b039-e4201dd39ea7-kube-api-access-vqtqs\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407007 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqx72\" (UniqueName: \"kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407026 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw5q4\" (UniqueName: \"kubernetes.io/projected/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-kube-api-access-mw5q4\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407043 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407065 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjgm\" (UniqueName: \"kubernetes.io/projected/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-kube-api-access-fgjgm\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407086 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/230d02ac-28f0-4758-91cc-577a7c62dece-tmp-dir\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407103 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407121 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/405a34ec-9d37-40b4-842b-7a5e0cc8342b-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407138 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnz5\" (UniqueName: \"kubernetes.io/projected/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-kube-api-access-9wnz5\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407153 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-node-bootstrap-token\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407206 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407239 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407258 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntck\" (UniqueName: \"kubernetes.io/projected/799084e4-c663-46fd-b6d2-ce5de36e3bc6-kube-api-access-bntck\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407294 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407328 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b699a9a-5674-4fb3-a4af-aad938990365-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407366 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bp4\" (UniqueName: \"kubernetes.io/projected/2fb633ee-b572-429f-bbb7-da362cc9f946-kube-api-access-f6bp4\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407415 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b9ab221-0b40-4bd9-adcb-550aac9fd590-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407443 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwqn\" (UniqueName: \"kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.407489 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-stats-auth\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.408847 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b297ba-c3c6-4b59-891a-1648996d8fd9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.410319 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-auth-proxy-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.410333 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/405a34ec-9d37-40b4-842b-7a5e0cc8342b-config\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.413268 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.413326 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.414601 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710e0795-32cf-4429-96de-01508f08690d-config\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.416384 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.418188 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca\") pod \"image-registry-66587d64c8-87vjc\" 
(UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.418941 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f92bb1-cf65-4842-a52e-72685ca2be23-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.419365 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.419703 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54f92bb1-cf65-4842-a52e-72685ca2be23-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.420112 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37b297ba-c3c6-4b59-891a-1648996d8fd9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.421809 5129 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f368f463-ed5c-4b90-bb12-82794199158b-tmp-dir\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.422682 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:08.922650672 +0000 UTC m=+112.726180689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.422899 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9defd256-bb45-4406-9160-111816ac3c7c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.422927 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.422994 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-default-certificate\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423014 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5138f051-1943-428e-a338-8a01376e467f-tmpfs\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423054 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f92bb1-cf65-4842-a52e-72685ca2be23-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423079 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-client\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423126 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-certs\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423151 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-srv-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423174 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h9zb\" (UniqueName: \"kubernetes.io/projected/6c2d0b64-7640-4cb0-928e-40c762a5b583-kube-api-access-4h9zb\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423241 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423264 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f368f463-ed5c-4b90-bb12-82794199158b-config\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423291 5129 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f368f463-ed5c-4b90-bb12-82794199158b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423319 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmg9p\" (UniqueName: \"kubernetes.io/projected/891601c4-e560-443f-a221-52b6fdc85cd3-kube-api-access-bmg9p\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.423369 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-metrics-certs\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424082 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b699a9a-5674-4fb3-a4af-aad938990365-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424130 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-plugins-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: 
\"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424760 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f92bb1-cf65-4842-a52e-72685ca2be23-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424854 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-csi-data-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424883 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424908 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/710e0795-32cf-4429-96de-01508f08690d-machine-approver-tls\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424933 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-cabundle\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.424977 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9fm\" (UniqueName: \"kubernetes.io/projected/b86b2c85-dc72-4796-b7be-553b02ee6b3c-kube-api-access-mk9fm\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.425003 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.425025 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/405a34ec-9d37-40b4-842b-7a5e0cc8342b-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.425058 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc 
kubenswrapper[5129]: I1211 16:56:08.425183 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f368f463-ed5c-4b90-bb12-82794199158b-config\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.425403 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/405a34ec-9d37-40b4-842b-7a5e0cc8342b-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.428294 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b297ba-c3c6-4b59-891a-1648996d8fd9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.428460 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/405a34ec-9d37-40b4-842b-7a5e0cc8342b-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.432502 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets\") pod 
\"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.432859 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/710e0795-32cf-4429-96de-01508f08690d-machine-approver-tls\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.433334 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f368f463-ed5c-4b90-bb12-82794199158b-serving-cert\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.439927 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.466246 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qnk7\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-kube-api-access-6qnk7\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.470194 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7cjq7\" (UniqueName: \"kubernetes.io/projected/652b8717-fc5d-4c51-bcb9-286947184f64-kube-api-access-7cjq7\") pod \"migrator-866fcbc849-2rsfc\" (UID: \"652b8717-fc5d-4c51-bcb9-286947184f64\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.490228 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526177 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526328 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnz5\" (UniqueName: \"kubernetes.io/projected/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-kube-api-access-9wnz5\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526354 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-node-bootstrap-token\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526387 
5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526416 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bntck\" (UniqueName: \"kubernetes.io/projected/799084e4-c663-46fd-b6d2-ce5de36e3bc6-kube-api-access-bntck\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526439 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526464 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b699a9a-5674-4fb3-a4af-aad938990365-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526490 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6bp4\" (UniqueName: \"kubernetes.io/projected/2fb633ee-b572-429f-bbb7-da362cc9f946-kube-api-access-f6bp4\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " 
pod="openshift-service-ca/service-ca-74545575db-964vr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526532 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b9ab221-0b40-4bd9-adcb-550aac9fd590-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526556 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwqn\" (UniqueName: \"kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526579 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-stats-auth\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526601 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526624 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9defd256-bb45-4406-9160-111816ac3c7c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526647 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526673 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-default-certificate\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526692 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5138f051-1943-428e-a338-8a01376e467f-tmpfs\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526715 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-client\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526748 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-certs\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526768 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-srv-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526805 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4h9zb\" (UniqueName: \"kubernetes.io/projected/6c2d0b64-7640-4cb0-928e-40c762a5b583-kube-api-access-4h9zb\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526836 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmg9p\" (UniqueName: \"kubernetes.io/projected/891601c4-e560-443f-a221-52b6fdc85cd3-kube-api-access-bmg9p\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526865 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-metrics-certs\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526883 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b699a9a-5674-4fb3-a4af-aad938990365-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526903 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-plugins-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526926 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-csi-data-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526949 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-cabundle\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526970 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9fm\" (UniqueName: \"kubernetes.io/projected/b86b2c85-dc72-4796-b7be-553b02ee6b3c-kube-api-access-mk9fm\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.526997 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/102d30b5-bb95-4477-a08f-93f2ba3259f9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527026 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-tmpfs\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527044 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-apiservice-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527067 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-key\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527087 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-service-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527111 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llf24\" (UniqueName: \"kubernetes.io/projected/979c1fa1-11b6-41df-9502-2384433fe142-kube-api-access-llf24\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527136 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-srv-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527158 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799084e4-c663-46fd-b6d2-ce5de36e3bc6-service-ca-bundle\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527179 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b699a9a-5674-4fb3-a4af-aad938990365-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527200 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/102d30b5-bb95-4477-a08f-93f2ba3259f9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527233 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4x8s8\" (UniqueName: \"kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527258 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-tmp-dir\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527282 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-images\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527303 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2bf\" (UniqueName: \"kubernetes.io/projected/0c5fd5f0-66e1-44f4-bfb0-093595462a64-kube-api-access-dz2bf\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527350 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527381 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6ct\" (UniqueName: \"kubernetes.io/projected/230d02ac-28f0-4758-91cc-577a7c62dece-kube-api-access-lv6ct\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527413 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527447 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b86b2c85-dc72-4796-b7be-553b02ee6b3c-webhook-certs\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527480 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527527 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d64c96-60a7-477c-b039-e4201dd39ea7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527560 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-tmpfs\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527583 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527623 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s55jr\" (UniqueName: \"kubernetes.io/projected/5138f051-1943-428e-a338-8a01376e467f-kube-api-access-s55jr\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527647 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/979c1fa1-11b6-41df-9502-2384433fe142-cert\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527668 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527693 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5fd5f0-66e1-44f4-bfb0-093595462a64-config\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527736 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-config\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527757 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5fd5f0-66e1-44f4-bfb0-093595462a64-serving-cert\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527789 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxbj\" (UniqueName: \"kubernetes.io/projected/9defd256-bb45-4406-9160-111816ac3c7c-kube-api-access-gzxbj\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527811 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-socket-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527834 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-webhook-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527856 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4dc\" (UniqueName: \"kubernetes.io/projected/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-kube-api-access-pp4dc\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527889 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b699a9a-5674-4fb3-a4af-aad938990365-config\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527912 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-registration-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527946 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.527978 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9defd256-bb45-4406-9160-111816ac3c7c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528001 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxvh\" (UniqueName: \"kubernetes.io/projected/8b9ab221-0b40-4bd9-adcb-550aac9fd590-kube-api-access-pfxvh\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528025 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528067 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-mountpoint-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528097 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ff29n\" (UniqueName: \"kubernetes.io/projected/102d30b5-bb95-4477-a08f-93f2ba3259f9-kube-api-access-ff29n\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528127 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-serving-cert\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528151 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528175 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-config-volume\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528196 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-metrics-tls\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528246 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqtqs\" (UniqueName: \"kubernetes.io/projected/f7d64c96-60a7-477c-b039-e4201dd39ea7-kube-api-access-vqtqs\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528269 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqx72\" (UniqueName: \"kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528316 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mw5q4\" (UniqueName: \"kubernetes.io/projected/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-kube-api-access-mw5q4\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528341 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528364 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjgm\" (UniqueName: \"kubernetes.io/projected/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-kube-api-access-fgjgm\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528391 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/230d02ac-28f0-4758-91cc-577a7c62dece-tmp-dir\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.528836 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/230d02ac-28f0-4758-91cc-577a7c62dece-tmp-dir\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.529068 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.029040891 +0000 UTC m=+112.832570928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.529135 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-tmp-dir\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.530879 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.532663 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-tmpfs\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.532994 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-node-bootstrap-token\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.534137 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-images\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.534268 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-csi-data-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.534657 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5b699a9a-5674-4fb3-a4af-aad938990365-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.534899 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-plugins-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.534932 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.535840 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.536373 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5138f051-1943-428e-a338-8a01376e467f-tmpfs\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.536505 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-cabundle\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.537247 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-service-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.537826 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbxkp\" (UniqueName: \"kubernetes.io/projected/710e0795-32cf-4429-96de-01508f08690d-kube-api-access-xbxkp\") pod \"machine-approver-54c688565-rvbh6\" (UID: \"710e0795-32cf-4429-96de-01508f08690d\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.537913 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-registration-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.538205 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.538879 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9defd256-bb45-4406-9160-111816ac3c7c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.539393 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/102d30b5-bb95-4477-a08f-93f2ba3259f9-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.539424 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.539477 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-mountpoint-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.541205 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.541792 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-config-volume\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.545037 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-metrics-tls\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.548033 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-default-certificate\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.548128 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.548424 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b699a9a-5674-4fb3-a4af-aad938990365-config\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.548589 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-webhook-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.550957 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/799084e4-c663-46fd-b6d2-ce5de36e3bc6-service-ca-bundle\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:08 crc kubenswrapper[5129]: I1211
16:56:08.551244 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-tmpfs\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.551377 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/979c1fa1-11b6-41df-9502-2384433fe142-cert\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.552601 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-ca\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.555870 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.556138 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-serving-cert\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.556541 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891601c4-e560-443f-a221-52b6fdc85cd3-socket-dir\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.556727 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230d02ac-28f0-4758-91cc-577a7c62dece-config\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.557409 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5fd5f0-66e1-44f4-bfb0-093595462a64-config\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.557670 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.558027 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-apiservice-cert\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.558278 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9defd256-bb45-4406-9160-111816ac3c7c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.558727 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/230d02ac-28f0-4758-91cc-577a7c62dece-etcd-client\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.558942 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-srv-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.559571 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.563151 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b699a9a-5674-4fb3-a4af-aad938990365-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.563227 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2fb633ee-b572-429f-bbb7-da362cc9f946-signing-key\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.563705 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-metrics-certs\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.563729 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c5fd5f0-66e1-44f4-bfb0-093595462a64-serving-cert\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.564006 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvwr\" (UniqueName: \"kubernetes.io/projected/54f92bb1-cf65-4842-a52e-72685ca2be23-kube-api-access-6qvwr\") pod \"openshift-controller-manager-operator-686468bdd5-qbzb7\" (UID: \"54f92bb1-cf65-4842-a52e-72685ca2be23\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.564022 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.564160 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d64c96-60a7-477c-b039-e4201dd39ea7-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.564300 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6c2d0b64-7640-4cb0-928e-40c762a5b583-certs\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.564645 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jpn\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.565016 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/799084e4-c663-46fd-b6d2-ce5de36e3bc6-stats-auth\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.565814 5129 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/102d30b5-bb95-4477-a08f-93f2ba3259f9-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.566164 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b9ab221-0b40-4bd9-adcb-550aac9fd590-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.573470 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b86b2c85-dc72-4796-b7be-553b02ee6b3c-webhook-certs\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.575279 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.579038 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.581676 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.585727 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/405a34ec-9d37-40b4-842b-7a5e0cc8342b-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-dkmb9\" (UID: \"405a34ec-9d37-40b4-842b-7a5e0cc8342b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.592367 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.604034 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f368f463-ed5c-4b90-bb12-82794199158b-kube-api-access\") pod \"kube-apiserver-operator-575994946d-l44p8\" (UID: \"f368f463-ed5c-4b90-bb12-82794199158b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.605648 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5138f051-1943-428e-a338-8a01376e467f-srv-cert\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.631418 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " 
pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.631685 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.131672613 +0000 UTC m=+112.935202630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.633869 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.634358 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kjgpd"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.636096 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b297ba-c3c6-4b59-891a-1648996d8fd9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-gzckr\" (UID: \"37b297ba-c3c6-4b59-891a-1648996d8fd9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.657316 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x8s8\" (UniqueName: \"kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8\") pod \"marketplace-operator-547dbd544d-dkmmj\" (UID: 
\"06554c04-9d86-4813-b92c-669a3ae5a776\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.671102 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnz5\" (UniqueName: \"kubernetes.io/projected/d3472ec7-ae60-436a-a49d-12e78eaa0d6c-kube-api-access-9wnz5\") pod \"packageserver-7d4fc7d867-6r5mg\" (UID: \"d3472ec7-ae60-436a-a49d-12e78eaa0d6c\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.680882 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.680886 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.687736 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.691166 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h9zb\" (UniqueName: \"kubernetes.io/projected/6c2d0b64-7640-4cb0-928e-40c762a5b583-kube-api-access-4h9zb\") pod \"machine-config-server-l89p6\" (UID: \"6c2d0b64-7640-4cb0-928e-40c762a5b583\") " pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: W1211 16:56:08.709212 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d2ac0a1_c1ee_481f_ba8a_498974954c9b.slice/crio-37d6c11803dcdf92685f57c95ccaecd542f7b636feb274b11ccf988a37d555dd WatchSource:0}: Error finding container 37d6c11803dcdf92685f57c95ccaecd542f7b636feb274b11ccf988a37d555dd: Status 404 returned error can't find the container with id 37d6c11803dcdf92685f57c95ccaecd542f7b636feb274b11ccf988a37d555dd Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.711583 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntck\" (UniqueName: \"kubernetes.io/projected/799084e4-c663-46fd-b6d2-ce5de36e3bc6-kube-api-access-bntck\") pod \"router-default-68cf44c8b8-xcrfz\" (UID: \"799084e4-c663-46fd-b6d2-ce5de36e3bc6\") " pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.728130 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmg9p\" (UniqueName: \"kubernetes.io/projected/891601c4-e560-443f-a221-52b6fdc85cd3-kube-api-access-bmg9p\") pod \"csi-hostpathplugin-xjqrz\" (UID: \"891601c4-e560-443f-a221-52b6fdc85cd3\") " pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.738485 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.738817 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.238802384 +0000 UTC m=+113.042332401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.764383 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2bf\" (UniqueName: \"kubernetes.io/projected/0c5fd5f0-66e1-44f4-bfb0-093595462a64-kube-api-access-dz2bf\") pod \"service-ca-operator-5b9c976747-9q2rp\" (UID: \"0c5fd5f0-66e1-44f4-bfb0-093595462a64\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.764629 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.776503 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6ct\" (UniqueName: \"kubernetes.io/projected/230d02ac-28f0-4758-91cc-577a7c62dece-kube-api-access-lv6ct\") pod \"etcd-operator-69b85846b6-d5lnt\" (UID: \"230d02ac-28f0-4758-91cc-577a7c62dece\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.801884 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.802313 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l89p6" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.808284 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.813494 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9fm\" (UniqueName: \"kubernetes.io/projected/b86b2c85-dc72-4796-b7be-553b02ee6b3c-kube-api-access-mk9fm\") pod \"multus-admission-controller-69db94689b-hxxwl\" (UID: \"b86b2c85-dc72-4796-b7be-553b02ee6b3c\") " pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.817997 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4dc\" (UniqueName: \"kubernetes.io/projected/92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5-kube-api-access-pp4dc\") pod \"machine-config-operator-67c9d58cbb-499sg\" (UID: \"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.826824 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.835369 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.841286 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.841965 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.341949212 +0000 UTC m=+113.145479229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.860585 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxvh\" (UniqueName: \"kubernetes.io/projected/8b9ab221-0b40-4bd9-adcb-550aac9fd590-kube-api-access-pfxvh\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nwd4t\" (UID: \"8b9ab221-0b40-4bd9-adcb-550aac9fd590\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.861232 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff29n\" 
(UniqueName: \"kubernetes.io/projected/102d30b5-bb95-4477-a08f-93f2ba3259f9-kube-api-access-ff29n\") pod \"machine-config-controller-f9cdd68f7-w5mtv\" (UID: \"102d30b5-bb95-4477-a08f-93f2ba3259f9\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.870606 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llf24\" (UniqueName: \"kubernetes.io/projected/979c1fa1-11b6-41df-9502-2384433fe142-kube-api-access-llf24\") pod \"ingress-canary-92n27\" (UID: \"979c1fa1-11b6-41df-9502-2384433fe142\") " pod="openshift-ingress-canary/ingress-canary-92n27" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.906870 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.907443 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqtqs\" (UniqueName: \"kubernetes.io/projected/f7d64c96-60a7-477c-b039-e4201dd39ea7-kube-api-access-vqtqs\") pod \"package-server-manager-77f986bd66-zkktb\" (UID: \"f7d64c96-60a7-477c-b039-e4201dd39ea7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.908304 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9"] Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.914484 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqx72\" (UniqueName: \"kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72\") pod \"collect-profiles-29424525-7fxt9\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" Dec 11 16:56:08 crc kubenswrapper[5129]: 
I1211 16:56:08.921806 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.929357 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.931016 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw5q4\" (UniqueName: \"kubernetes.io/projected/982ae1d1-db2c-4b76-9450-77d8ca4f6e11-kube-api-access-mw5q4\") pod \"catalog-operator-75ff9f647d-86b4q\" (UID: \"982ae1d1-db2c-4b76-9450-77d8ca4f6e11\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.934618 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.943656 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:08 crc kubenswrapper[5129]: E1211 16:56:08.944026 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.444010507 +0000 UTC m=+113.247540524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.963855 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjgm\" (UniqueName: \"kubernetes.io/projected/bc8134ed-b13c-4f8a-9821-aabcf7600ddb-kube-api-access-fgjgm\") pod \"dns-default-xrzh8\" (UID: \"bc8134ed-b13c-4f8a-9821-aabcf7600ddb\") " pod="openshift-dns/dns-default-xrzh8" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.965896 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.968958 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.979394 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55jr\" (UniqueName: \"kubernetes.io/projected/5138f051-1943-428e-a338-8a01376e467f-kube-api-access-s55jr\") pod \"olm-operator-5cdf44d969-dg7mm\" (UID: \"5138f051-1943-428e-a338-8a01376e467f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:08 crc kubenswrapper[5129]: I1211 16:56:08.988226 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:08.998348 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.024868 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.025822 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.026108 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.033616 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.034312 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6bp4\" (UniqueName: \"kubernetes.io/projected/2fb633ee-b572-429f-bbb7-da362cc9f946-kube-api-access-f6bp4\") pod \"service-ca-74545575db-964vr\" (UID: \"2fb633ee-b572-429f-bbb7-da362cc9f946\") " pod="openshift-service-ca/service-ca-74545575db-964vr" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.034659 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwqn\" (UniqueName: \"kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn\") pod \"cni-sysctl-allowlist-ds-98qvs\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") " pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.056720 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.057013 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.556996549 +0000 UTC m=+113.360526566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.070708 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.071889 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b699a9a-5674-4fb3-a4af-aad938990365-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-2zp55\" (UID: \"5b699a9a-5674-4fb3-a4af-aad938990365\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.075217 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxbj\" (UniqueName: \"kubernetes.io/projected/9defd256-bb45-4406-9160-111816ac3c7c-kube-api-access-gzxbj\") pod \"kube-storage-version-migrator-operator-565b79b866-p9m8g\" (UID: \"9defd256-bb45-4406-9160-111816ac3c7c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.093498 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-92n27" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.094097 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-964vr" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.104763 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-xrzh8" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.138217 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.177643 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.178675 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.678655902 +0000 UTC m=+113.482185919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: W1211 16:56:09.185622 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c2d0b64_7640_4cb0_928e_40c762a5b583.slice/crio-75c2a3cf07176802d5bd2fd96ef4c6c711cc6e213708e6af733dd4208fdde189 WatchSource:0}: Error finding container 75c2a3cf07176802d5bd2fd96ef4c6c711cc6e213708e6af733dd4208fdde189: Status 404 returned error can't find the container with id 75c2a3cf07176802d5bd2fd96ef4c6c711cc6e213708e6af733dd4208fdde189 Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.203552 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" event={"ID":"46c1607c-6eed-4090-b499-6751db7a0e69","Type":"ContainerStarted","Data":"12396a48f6561b2a2bcd4aff101d1f1c3f09c419b2e418a9c2e096191a655a9a"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.211844 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" event={"ID":"a4a1ad2a-71de-426e-a205-d2cf008a150b","Type":"ContainerStarted","Data":"736c40eefe978e05bb6577e9194a3734054e81fdb196ed6c9dc883cec9a4023b"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.213165 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" Dec 11 16:56:09 crc kubenswrapper[5129]: W1211 16:56:09.233408 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405a34ec_9d37_40b4_842b_7a5e0cc8342b.slice/crio-df18a9a170f5d89179478e2683cf5019ca2fd51f6a4485613bd9c2d32afcefc5 WatchSource:0}: Error finding container df18a9a170f5d89179478e2683cf5019ca2fd51f6a4485613bd9c2d32afcefc5: Status 404 returned error can't find the container with id df18a9a170f5d89179478e2683cf5019ca2fd51f6a4485613bd9c2d32afcefc5 Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.250251 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.251118 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" event={"ID":"da272c91-3742-497e-b116-40d44d676527","Type":"ContainerStarted","Data":"a99a74c8ca8580cdde61365c5554058f594791e353ac6109e1af3bdcbf3b9ec3"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.251167 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" event={"ID":"da272c91-3742-497e-b116-40d44d676527","Type":"ContainerStarted","Data":"385eda6023b7bb9a81b8279c1d41615e257359d69e0ba0885b24119f286f30cc"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.251350 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.257443 5129 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-fmrcf container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.257494 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" podUID="da272c91-3742-497e-b116-40d44d676527" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.269440 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" event={"ID":"fff5aa2a-7859-43b5-9a93-a567567a9270","Type":"ContainerStarted","Data":"eac14f907e893cb747b51d4f9a1056b87f66c5b5c57d70c76296225504aa694f"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.269505 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" event={"ID":"fff5aa2a-7859-43b5-9a93-a567567a9270","Type":"ContainerStarted","Data":"c665ed4e5a11884b341a7e0881e128909fea8fea253e6d2a5a507b639f558e24"} Dec 11 16:56:09 crc kubenswrapper[5129]: W1211 16:56:09.279256 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652b8717_fc5d_4c51_bcb9_286947184f64.slice/crio-c64b1900863ab347af621bab205276e029fd586da5d89c18ce9e1d49d2e62ce7 WatchSource:0}: Error finding container c64b1900863ab347af621bab205276e029fd586da5d89c18ce9e1d49d2e62ce7: Status 404 returned error can't find the container with id c64b1900863ab347af621bab205276e029fd586da5d89c18ce9e1d49d2e62ce7 Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.280285 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.282051 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.782029317 +0000 UTC m=+113.585559334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.331931 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" event={"ID":"d7ed60a5-b258-460e-9fc3-2461aaa4cf12","Type":"ContainerStarted","Data":"209951ba5584d9a3f6184768e4cbbaa2e3ca164bbba199a58deca90582f453a0"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.359298 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" event={"ID":"4bc8ddee-33ed-4b56-a439-7ba8e704624b","Type":"ContainerStarted","Data":"502589f0c1ab4470ff9617c3a31308c1791e77fc1603eb363d2bc12cfac7cb88"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.394127 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.394963 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.894942537 +0000 UTC m=+113.698472554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.395125 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.396618 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.896606519 +0000 UTC m=+113.700136546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.430713 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" event={"ID":"6d2ac0a1-c1ee-481f-ba8a-498974954c9b","Type":"ContainerStarted","Data":"37d6c11803dcdf92685f57c95ccaecd542f7b636feb274b11ccf988a37d555dd"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.443028 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" event={"ID":"995ea4e8-ce15-451c-b499-8fb323605af8","Type":"ContainerStarted","Data":"9426e4d2b1ccec7e1281a994e414caee0b60c7eccf749208248ac41ef3347fdd"} Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.443070 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.444346 5129 patch_prober.go:28] interesting pod/downloads-747b44746d-nmqql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.444387 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-nmqql" podUID="ed3c1960-3512-45f3-ba99-e79179060051" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial 
tcp 10.217.0.8:8080: connect: connection refused" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.449282 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.454902 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.454996 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.455031 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.457116 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.496491 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.499858 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:09.999805928 +0000 UTC m=+113.803335955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.597202 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xjqrz"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.598777 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.602220 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.102154081 +0000 UTC m=+113.905684108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.624216 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7"] Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.701128 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-nmqql" podStartSLOduration=93.70110956 podStartE2EDuration="1m33.70110956s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:09.684537706 +0000 UTC m=+113.488067733" watchObservedRunningTime="2025-12-11 16:56:09.70110956 +0000 UTC m=+113.504639577" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.705153 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.705625 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:56:10.20561126 +0000 UTC m=+114.009141277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.745742 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-5lsw5" podStartSLOduration=93.745720513 podStartE2EDuration="1m33.745720513s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:09.704956429 +0000 UTC m=+113.508486476" watchObservedRunningTime="2025-12-11 16:56:09.745720513 +0000 UTC m=+113.549250530" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.806968 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.807377 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.307362444 +0000 UTC m=+114.110892451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: W1211 16:56:09.833814 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f92bb1_cf65_4842_a52e_72685ca2be23.slice/crio-69d0501431b63a3d57cb42df8d2a983f081dea188154ecba5bb06d3b0bea8e00 WatchSource:0}: Error finding container 69d0501431b63a3d57cb42df8d2a983f081dea188154ecba5bb06d3b0bea8e00: Status 404 returned error can't find the container with id 69d0501431b63a3d57cb42df8d2a983f081dea188154ecba5bb06d3b0bea8e00 Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.838214 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-cvmcg" podStartSLOduration=93.83819999 podStartE2EDuration="1m33.83819999s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:09.836992123 +0000 UTC m=+113.640522140" watchObservedRunningTime="2025-12-11 16:56:09.83819999 +0000 UTC m=+113.641730007" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.885755 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" podStartSLOduration=93.885738584 podStartE2EDuration="1m33.885738584s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:09.884109114 +0000 UTC m=+113.687639131" watchObservedRunningTime="2025-12-11 16:56:09.885738584 +0000 UTC m=+113.689268601" Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.909169 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:09 crc kubenswrapper[5129]: E1211 16:56:09.914906 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.414882468 +0000 UTC m=+114.218412485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:09 crc kubenswrapper[5129]: I1211 16:56:09.963425 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sz8rd" podStartSLOduration=93.963410172 podStartE2EDuration="1m33.963410172s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:09.907009864 +0000 UTC m=+113.710539891" watchObservedRunningTime="2025-12-11 
16:56:09.963410172 +0000 UTC m=+113.766940179" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.016536 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.016848 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.516837138 +0000 UTC m=+114.320367155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.105961 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nlzgz" podStartSLOduration=94.105942312 podStartE2EDuration="1m34.105942312s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:10.070817842 +0000 UTC m=+113.874347859" watchObservedRunningTime="2025-12-11 16:56:10.105942312 +0000 UTC m=+113.909472329" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.119463 5129 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.119581 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.619558404 +0000 UTC m=+114.423088421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.120036 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.120318 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.620311526 +0000 UTC m=+114.423841543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.131253 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8"] Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.167755 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-92n27"] Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.220595 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.220802 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.720787011 +0000 UTC m=+114.524317018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.221409 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.221916 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.721902866 +0000 UTC m=+114.525432873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.322152 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.322351 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.82233374 +0000 UTC m=+114.625863757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.322405 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.322720 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.822713232 +0000 UTC m=+114.626243249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.350235 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" podStartSLOduration=94.350213464 podStartE2EDuration="1m34.350213464s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:10.347890552 +0000 UTC m=+114.151420569" watchObservedRunningTime="2025-12-11 16:56:10.350213464 +0000 UTC m=+114.153743481" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.399960 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" podStartSLOduration=94.399946796 podStartE2EDuration="1m34.399946796s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:10.398320546 +0000 UTC m=+114.201850583" watchObservedRunningTime="2025-12-11 16:56:10.399946796 +0000 UTC m=+114.203476813" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.423040 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.423173 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.923155066 +0000 UTC m=+114.726685083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.423355 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.423424 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.423460 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.424446 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:10.924438176 +0000 UTC m=+114.727968193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.424734 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.432480 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:10 crc 
kubenswrapper[5129]: I1211 16:56:10.492288 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.524638 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.525310 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.525340 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.525376 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.525684 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.025667365 +0000 UTC m=+114.829197382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.548543 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.549880 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15d52990-0733-45fe-ac96-429a9503dbab-metrics-certs\") pod \"network-metrics-daemon-fptr2\" (UID: \"15d52990-0733-45fe-ac96-429a9503dbab\") " pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.577143 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:10 crc kubenswrapper[5129]: 
W1211 16:56:10.609301 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf368f463_ed5c_4b90_bb12_82794199158b.slice/crio-e42487fefab248c70769190f6313652d6fde87185f7e6508f3aaa000c2c98f95 WatchSource:0}: Error finding container e42487fefab248c70769190f6313652d6fde87185f7e6508f3aaa000c2c98f95: Status 404 returned error can't find the container with id e42487fefab248c70769190f6313652d6fde87185f7e6508f3aaa000c2c98f95 Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.627039 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.627346 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.127333276 +0000 UTC m=+114.930863293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.648616 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" event={"ID":"710e0795-32cf-4429-96de-01508f08690d","Type":"ContainerStarted","Data":"0e6fd2477c3ce4e9d764cfb3f547ae22b2a77815d5b1a617dbce050c3d16fdbc"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.658079 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l89p6" event={"ID":"6c2d0b64-7640-4cb0-928e-40c762a5b583","Type":"ContainerStarted","Data":"ee05b64921d52e13747790a20ab35cb27f65dab7abc65a6dadab365b4f5350ab"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.658139 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l89p6" event={"ID":"6c2d0b64-7640-4cb0-928e-40c762a5b583","Type":"ContainerStarted","Data":"75c2a3cf07176802d5bd2fd96ef4c6c711cc6e213708e6af733dd4208fdde189"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.679271 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" event={"ID":"54f92bb1-cf65-4842-a52e-72685ca2be23","Type":"ContainerStarted","Data":"69d0501431b63a3d57cb42df8d2a983f081dea188154ecba5bb06d3b0bea8e00"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.721997 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" event={"ID":"d3472ec7-ae60-436a-a49d-12e78eaa0d6c","Type":"ContainerStarted","Data":"c05536d203a367ce7a18c3af48ed7f0ef715777f5971190df39ecc6b1016edcf"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.727736 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.728886 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.228863214 +0000 UTC m=+115.032393231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.730263 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" event={"ID":"405a34ec-9d37-40b4-842b-7a5e0cc8342b","Type":"ContainerStarted","Data":"df18a9a170f5d89179478e2683cf5019ca2fd51f6a4485613bd9c2d32afcefc5"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.742822 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fptr2" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.760000 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.763992 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.778392 5129 generic.go:358] "Generic (PLEG): container finished" podID="4bc8ddee-33ed-4b56-a439-7ba8e704624b" containerID="cc9782a83e67d2d5f592963782fdfacbcf599404b34719bf1228d987b0634fb4" exitCode=0 Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.778502 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" event={"ID":"4bc8ddee-33ed-4b56-a439-7ba8e704624b","Type":"ContainerDied","Data":"cc9782a83e67d2d5f592963782fdfacbcf599404b34719bf1228d987b0634fb4"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.780167 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" event={"ID":"06554c04-9d86-4813-b92c-669a3ae5a776","Type":"ContainerStarted","Data":"afc5e3a637f02bdc1ddb8a26718db935635814bf3d38cd42798e78fe34b143d6"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.787766 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" event={"ID":"6d2ac0a1-c1ee-481f-ba8a-498974954c9b","Type":"ContainerStarted","Data":"7c99e619c3751f4f301952ab4ba741978116b81cfeb8245bcc592e9e6c8b3f83"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.789972 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-jhm42" podStartSLOduration=94.789956148 
podStartE2EDuration="1m34.789956148s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:10.788737551 +0000 UTC m=+114.592267578" watchObservedRunningTime="2025-12-11 16:56:10.789956148 +0000 UTC m=+114.593486165" Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.808227 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" event={"ID":"37b297ba-c3c6-4b59-891a-1648996d8fd9","Type":"ContainerStarted","Data":"429969f44be65469c234f2a19207f33600e26a359f57458621e09ee7e7a8c995"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.809337 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9"] Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.813963 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" event={"ID":"891601c4-e560-443f-a221-52b6fdc85cd3","Type":"ContainerStarted","Data":"144ad83122c328d49808dfb66d1bf57d21a217bd2ecca9c6d5dd32542f93aa29"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.819756 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" event={"ID":"652b8717-fc5d-4c51-bcb9-286947184f64","Type":"ContainerStarted","Data":"c64b1900863ab347af621bab205276e029fd586da5d89c18ce9e1d49d2e62ce7"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.829309 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " 
pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.830211 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.330198456 +0000 UTC m=+115.133728473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:10 crc kubenswrapper[5129]: W1211 16:56:10.843685 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8 WatchSource:0}: Error finding container 3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8: Status 404 returned error can't find the container with id 3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8 Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.872328 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" event={"ID":"46c1607c-6eed-4090-b499-6751db7a0e69","Type":"ContainerStarted","Data":"a2c69cdc3f9a1fb14e61c4041584ad49bdd9b4e42efce685d07db145248fc739"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.891044 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" 
event={"ID":"799084e4-c663-46fd-b6d2-ce5de36e3bc6","Type":"ContainerStarted","Data":"8cc8f74c8376361572e6cd6aacb975127bc2a02d3ab2bcfb5eb19183dab5eede"} Dec 11 16:56:10 crc kubenswrapper[5129]: I1211 16:56:10.931976 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:10 crc kubenswrapper[5129]: E1211 16:56:10.932919 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.43290125 +0000 UTC m=+115.236431347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.003682 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" event={"ID":"a4a1ad2a-71de-426e-a205-d2cf008a150b","Type":"ContainerStarted","Data":"5cb66fed6313f5228dfcbe56871e682c973c7a802568653282535390b2a1dc4e"} Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.037566 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.037931 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.537915067 +0000 UTC m=+115.341445084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.043729 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" podStartSLOduration=95.043712536 podStartE2EDuration="1m35.043712536s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:11.042288912 +0000 UTC m=+114.845818929" watchObservedRunningTime="2025-12-11 16:56:11.043712536 +0000 UTC m=+114.847242553" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.075973 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-g94wz" podStartSLOduration=95.075956526 podStartE2EDuration="1m35.075956526s" podCreationTimestamp="2025-12-11 
16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:11.075348837 +0000 UTC m=+114.878878854" watchObservedRunningTime="2025-12-11 16:56:11.075956526 +0000 UTC m=+114.879486543" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.092391 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" event={"ID":"048e4610-b9c6-4243-8a33-8c6156e3f025","Type":"ContainerStarted","Data":"7cecca7ddcbabbfe816bf13b17c39710c65960e010ca807260085cbc098d9556"} Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.095044 5129 patch_prober.go:28] interesting pod/downloads-747b44746d-nmqql container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.095070 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-nmqql" podUID="ed3c1960-3512-45f3-ba99-e79179060051" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.139243 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.140557 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:56:11.640539938 +0000 UTC m=+115.444069955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.173915 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.213363 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l89p6" podStartSLOduration=6.213321805 podStartE2EDuration="6.213321805s" podCreationTimestamp="2025-12-11 16:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:11.113346275 +0000 UTC m=+114.916876282" watchObservedRunningTime="2025-12-11 16:56:11.213321805 +0000 UTC m=+115.016851822" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.224769 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-766lv" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.233257 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" podStartSLOduration=95.233234012 podStartE2EDuration="1m35.233234012s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 16:56:11.213757568 +0000 UTC m=+115.017287585" watchObservedRunningTime="2025-12-11 16:56:11.233234012 +0000 UTC m=+115.036764029" Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.249244 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.250641 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.750627042 +0000 UTC m=+115.554157059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.276577 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.289986 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"] Dec 11 16:56:11 crc kubenswrapper[5129]: W1211 16:56:11.308705 5129 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d64c96_60a7_477c_b039_e4201dd39ea7.slice/crio-76cb1ddf68f014d6d2562b52d43c4147e2ea4926c8fb808dcd773c93121c443e WatchSource:0}: Error finding container 76cb1ddf68f014d6d2562b52d43c4147e2ea4926c8fb808dcd773c93121c443e: Status 404 returned error can't find the container with id 76cb1ddf68f014d6d2562b52d43c4147e2ea4926c8fb808dcd773c93121c443e Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.308802 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.312580 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.350156 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.350479 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.850461886 +0000 UTC m=+115.653991903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: W1211 16:56:11.373139 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230d02ac_28f0_4758_91cc_577a7c62dece.slice/crio-9d6c059d6432e83d33318a8b719f299d3685a0c53e177fa7b733880aee6bb6bf WatchSource:0}: Error finding container 9d6c059d6432e83d33318a8b719f299d3685a0c53e177fa7b733880aee6bb6bf: Status 404 returned error can't find the container with id 9d6c059d6432e83d33318a8b719f299d3685a0c53e177fa7b733880aee6bb6bf Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.393555 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q"] Dec 11 16:56:11 crc kubenswrapper[5129]: W1211 16:56:11.395929 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9ab221_0b40_4bd9_adcb_550aac9fd590.slice/crio-d73ec3b4c28af68295359b88076e215cc4716ac3e24aefaea586cd39a47454c0 WatchSource:0}: Error finding container d73ec3b4c28af68295359b88076e215cc4716ac3e24aefaea586cd39a47454c0: Status 404 returned error can't find the container with id d73ec3b4c28af68295359b88076e215cc4716ac3e24aefaea586cd39a47454c0 Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.401658 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.423034 5129 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xrzh8"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.453190 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.464777 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:11.964713869 +0000 UTC m=+115.768243886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.485755 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-964vr"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.511534 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.514238 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hxxwl"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.521443 
5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.536688 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fptr2"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.539620 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.548621 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g"] Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.553967 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.554263 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.054245045 +0000 UTC m=+115.857775062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: W1211 16:56:11.605422 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb633ee_b572_429f_bbb7_da362cc9f946.slice/crio-d4339ded66f7f8f7b14fc2d71e8a23ecdcae45a4919c5ca892857f5770b93b9c WatchSource:0}: Error finding container d4339ded66f7f8f7b14fc2d71e8a23ecdcae45a4919c5ca892857f5770b93b9c: Status 404 returned error can't find the container with id d4339ded66f7f8f7b14fc2d71e8a23ecdcae45a4919c5ca892857f5770b93b9c Dec 11 16:56:11 crc kubenswrapper[5129]: W1211 16:56:11.636781 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15d52990_0733_45fe_ac96_429a9503dbab.slice/crio-d0e9c525f5a86d35bb917a7d9a90c7640b92f75b3f23280244fe0c5c69de1970 WatchSource:0}: Error finding container d0e9c525f5a86d35bb917a7d9a90c7640b92f75b3f23280244fe0c5c69de1970: Status 404 returned error can't find the container with id d0e9c525f5a86d35bb917a7d9a90c7640b92f75b3f23280244fe0c5c69de1970 Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.655367 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:11 crc 
kubenswrapper[5129]: W1211 16:56:11.656630 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-40928ae90ed0317155b68263fde1c57dca672408a792bf0585a733dffabb6a1f WatchSource:0}: Error finding container 40928ae90ed0317155b68263fde1c57dca672408a792bf0585a733dffabb6a1f: Status 404 returned error can't find the container with id 40928ae90ed0317155b68263fde1c57dca672408a792bf0585a733dffabb6a1f Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.656831 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.156811035 +0000 UTC m=+115.960341052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.756199 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.756775 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.256758254 +0000 UTC m=+116.060288271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.859734 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.860030 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.360012595 +0000 UTC m=+116.163542612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:11 crc kubenswrapper[5129]: I1211 16:56:11.961441 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:11 crc kubenswrapper[5129]: E1211 16:56:11.962079 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.462058459 +0000 UTC m=+116.265588476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.063087 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.063434 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.563422161 +0000 UTC m=+116.366952178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.164015 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.164441 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.664410112 +0000 UTC m=+116.467940129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.213973 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.214411 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.225867 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" event={"ID":"799084e4-c663-46fd-b6d2-ce5de36e3bc6","Type":"ContainerStarted","Data":"c7e289620db830d439fa11c205bfdc56e25d9dd515b2e304e67db2145030f508"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.230783 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.235436 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"b820da66333ad5957098c7771d0544cfe6fb01e778eaafb123f27a0969b1fef0"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.242207 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" 
event={"ID":"a4a1ad2a-71de-426e-a205-d2cf008a150b","Type":"ContainerStarted","Data":"f47096f1805f802a4e2073300cda5b27048bcfb1f7d4aade88397de04ef0b7fd"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.259764 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" event={"ID":"102d30b5-bb95-4477-a08f-93f2ba3259f9","Type":"ContainerStarted","Data":"3287e5c5e38cd257ef325dcfb03735c823c9d48618d32b5491de4b862e833ca4"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.262087 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" event={"ID":"d3472ec7-ae60-436a-a49d-12e78eaa0d6c","Type":"ContainerStarted","Data":"34f3ddbc5faa612e64210d19dbdf11db3af52dd25200803c57a143b9a03a8d88"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.262518 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.266138 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.266658 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.766641952 +0000 UTC m=+116.570171969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.275498 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" podStartSLOduration=96.275484897 podStartE2EDuration="1m36.275484897s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.24947517 +0000 UTC m=+116.053005187" watchObservedRunningTime="2025-12-11 16:56:12.275484897 +0000 UTC m=+116.079014914" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.276895 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-kjgpd" podStartSLOduration=96.27688828 podStartE2EDuration="1m36.27688828s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.274390352 +0000 UTC m=+116.077920369" watchObservedRunningTime="2025-12-11 16:56:12.27688828 +0000 UTC m=+116.080418297" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.302224 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" event={"ID":"405a34ec-9d37-40b4-842b-7a5e0cc8342b","Type":"ContainerStarted","Data":"817e6e5aa28c2284eebae2124f69e4906958fc1c43a9c24e6da4a4d1e8cb06b3"} Dec 11 16:56:12 
crc kubenswrapper[5129]: I1211 16:56:12.335490 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-dkmb9" podStartSLOduration=96.335473606 podStartE2EDuration="1m36.335473606s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.334825746 +0000 UTC m=+116.138355763" watchObservedRunningTime="2025-12-11 16:56:12.335473606 +0000 UTC m=+116.139003623" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.369852 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.371439 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.871424031 +0000 UTC m=+116.674954048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.385299 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" podStartSLOduration=96.38528218 podStartE2EDuration="1m36.38528218s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.383999771 +0000 UTC m=+116.187529778" watchObservedRunningTime="2025-12-11 16:56:12.38528218 +0000 UTC m=+116.188812197" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.451813 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" event={"ID":"9defd256-bb45-4406-9160-111816ac3c7c","Type":"ContainerStarted","Data":"e2f5293affd547bb77f2c05e0276871fdb2271af139b44bb6dee1de11069fd6c"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.473207 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.474881 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:12.974869158 +0000 UTC m=+116.778399175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.513861 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" event={"ID":"5138f051-1943-428e-a338-8a01376e467f","Type":"ContainerStarted","Data":"083eb3e27b7fa2fe4837414fb28cc8e08cea43be0cbba56630b3c9bf47ce6ce9"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.513915 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" event={"ID":"5138f051-1943-428e-a338-8a01376e467f","Type":"ContainerStarted","Data":"512c9b4b41f38b9587ff607d087a9f6fecf9149148e3d8d821e89f68d0d6ec27"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.514617 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.546356 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" podStartSLOduration=96.546336364 podStartE2EDuration="1m36.546336364s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.471344499 +0000 UTC m=+116.274874516" watchObservedRunningTime="2025-12-11 16:56:12.546336364 +0000 UTC m=+116.349866381" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.547503 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" podStartSLOduration=96.547497799 podStartE2EDuration="1m36.547497799s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.544998612 +0000 UTC m=+116.348528629" watchObservedRunningTime="2025-12-11 16:56:12.547497799 +0000 UTC m=+116.351027816" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.576984 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.577154 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.077128339 +0000 UTC m=+116.880658356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.577490 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.578364 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.078352476 +0000 UTC m=+116.881882483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.589625 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-dg7mm" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.592616 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" event={"ID":"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5","Type":"ContainerStarted","Data":"b3cf24d5ecb37af36c72b2f6a1228c9857992ba5193494ae64ab6be394af12c7"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.606194 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" event={"ID":"4bc8ddee-33ed-4b56-a439-7ba8e704624b","Type":"ContainerStarted","Data":"ae625e10afde1157b48695536f43bf09f65a8139d35202ba82dc58fd59e6e827"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.614257 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-964vr" event={"ID":"2fb633ee-b572-429f-bbb7-da362cc9f946","Type":"ContainerStarted","Data":"d4339ded66f7f8f7b14fc2d71e8a23ecdcae45a4919c5ca892857f5770b93b9c"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.629908 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" 
event={"ID":"8b9ab221-0b40-4bd9-adcb-550aac9fd590","Type":"ContainerStarted","Data":"fdfe62e5f0ba071288aec3ee066b266184213a3dafcc83e37fe84ca48d3c601e"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.629945 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" event={"ID":"8b9ab221-0b40-4bd9-adcb-550aac9fd590","Type":"ContainerStarted","Data":"d73ec3b4c28af68295359b88076e215cc4716ac3e24aefaea586cd39a47454c0"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.648112 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" podStartSLOduration=96.648097898 podStartE2EDuration="1m36.648097898s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.647053717 +0000 UTC m=+116.450583734" watchObservedRunningTime="2025-12-11 16:56:12.648097898 +0000 UTC m=+116.451627915" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.674451 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nwd4t" podStartSLOduration=96.674434375 podStartE2EDuration="1m36.674434375s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.674222289 +0000 UTC m=+116.477752316" watchObservedRunningTime="2025-12-11 16:56:12.674434375 +0000 UTC m=+116.477964392" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.684180 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.685745 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.185728136 +0000 UTC m=+116.989258153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.688540 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" event={"ID":"f7d64c96-60a7-477c-b039-e4201dd39ea7","Type":"ContainerStarted","Data":"14d32a1240a4418eaad9678181642f5440f521f8c8881998040063f37c864c62"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.688572 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" event={"ID":"f7d64c96-60a7-477c-b039-e4201dd39ea7","Type":"ContainerStarted","Data":"76cb1ddf68f014d6d2562b52d43c4147e2ea4926c8fb808dcd773c93121c443e"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.709562 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" event={"ID":"37b297ba-c3c6-4b59-891a-1648996d8fd9","Type":"ContainerStarted","Data":"445cbf59a890851841b2870c40e79385d96f5521d9212deaa0ebfb3c9640ee9e"} Dec 11 
16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.746842 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" event={"ID":"02b278cf-b87e-4f64-9619-748b8a89619d","Type":"ContainerStarted","Data":"5f8f288252a2492ef6bb7bc21284ea01c59f002c9d09c463dded1ecc19b70b7d"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.746888 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" event={"ID":"02b278cf-b87e-4f64-9619-748b8a89619d","Type":"ContainerStarted","Data":"3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.757755 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-gzckr" podStartSLOduration=96.757740548 podStartE2EDuration="1m36.757740548s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.756942714 +0000 UTC m=+116.560472731" watchObservedRunningTime="2025-12-11 16:56:12.757740548 +0000 UTC m=+116.561270565" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.768926 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"40928ae90ed0317155b68263fde1c57dca672408a792bf0585a733dffabb6a1f"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.778387 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-92n27" event={"ID":"979c1fa1-11b6-41df-9502-2384433fe142","Type":"ContainerStarted","Data":"0656f358ff1abc641a7f2aa8a63e26f7d8ca928aafdd1e770d1329c7f5b763be"} Dec 11 16:56:12 crc kubenswrapper[5129]: 
I1211 16:56:12.778430 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-92n27" event={"ID":"979c1fa1-11b6-41df-9502-2384433fe142","Type":"ContainerStarted","Data":"2ab382adf64fad1bf75e189ecb667f7bd671cda6706ad130fdfb106b7cfbabd7"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.780358 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" podStartSLOduration=96.780344269 podStartE2EDuration="1m36.780344269s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.778282195 +0000 UTC m=+116.581812212" watchObservedRunningTime="2025-12-11 16:56:12.780344269 +0000 UTC m=+116.583874286" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.785054 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" event={"ID":"048e4610-b9c6-4243-8a33-8c6156e3f025","Type":"ContainerStarted","Data":"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.785799 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.786027 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.787742 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.287724388 +0000 UTC m=+117.091254405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.833636 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-92n27" podStartSLOduration=7.833621161 podStartE2EDuration="7.833621161s" podCreationTimestamp="2025-12-11 16:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.832475306 +0000 UTC m=+116.636005333" watchObservedRunningTime="2025-12-11 16:56:12.833621161 +0000 UTC m=+116.637151178" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.834773 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xrzh8" event={"ID":"bc8134ed-b13c-4f8a-9821-aabcf7600ddb","Type":"ContainerStarted","Data":"efe4d849fed75de180903e0dd5d1ae279a1fe99f384f074b5b270a5b767c1747"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.868283 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" podStartSLOduration=7.868263605 podStartE2EDuration="7.868263605s" podCreationTimestamp="2025-12-11 16:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 16:56:12.862057742 +0000 UTC m=+116.665587759" watchObservedRunningTime="2025-12-11 16:56:12.868263605 +0000 UTC m=+116.671793622" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.889838 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.891337 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.39132125 +0000 UTC m=+117.194851267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.892329 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" event={"ID":"710e0795-32cf-4429-96de-01508f08690d","Type":"ContainerStarted","Data":"efdb452fd8221e21061a596629d72b3aa6bbf16a70aa97f0b7ac1dab8b7427fe"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.920219 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 
16:56:12.949236 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.949663 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" event={"ID":"54f92bb1-cf65-4842-a52e-72685ca2be23","Type":"ContainerStarted","Data":"49abfe187f403683c4b24787f9259e12f95f734572c0470daa8bad182b432303"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.957676 5129 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xcrfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:56:12 crc kubenswrapper[5129]: [-]has-synced failed: reason withheld Dec 11 16:56:12 crc kubenswrapper[5129]: [+]process-running ok Dec 11 16:56:12 crc kubenswrapper[5129]: healthz check failed Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.957735 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" podUID="799084e4-c663-46fd-b6d2-ce5de36e3bc6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.969673 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" podStartSLOduration=97.969651889 podStartE2EDuration="1m37.969651889s" podCreationTimestamp="2025-12-11 16:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:12.953607211 +0000 UTC m=+116.757137228" watchObservedRunningTime="2025-12-11 16:56:12.969651889 +0000 UTC m=+116.773181906" Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 
16:56:12.978756 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" event={"ID":"b86b2c85-dc72-4796-b7be-553b02ee6b3c","Type":"ContainerStarted","Data":"bf28c9f194caa1391ec2620008af417b2099a018b081fcfa25b78982f23bb1af"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.987878 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" event={"ID":"f368f463-ed5c-4b90-bb12-82794199158b","Type":"ContainerStarted","Data":"e42487fefab248c70769190f6313652d6fde87185f7e6508f3aaa000c2c98f95"} Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.991991 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:12 crc kubenswrapper[5129]: E1211 16:56:12.993309 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.493295891 +0000 UTC m=+117.296825908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:12 crc kubenswrapper[5129]: I1211 16:56:12.997896 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fptr2" event={"ID":"15d52990-0733-45fe-ac96-429a9503dbab","Type":"ContainerStarted","Data":"d0e9c525f5a86d35bb917a7d9a90c7640b92f75b3f23280244fe0c5c69de1970"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.043831 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" event={"ID":"230d02ac-28f0-4758-91cc-577a7c62dece","Type":"ContainerStarted","Data":"9d6c059d6432e83d33318a8b719f299d3685a0c53e177fa7b733880aee6bb6bf"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.070230 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" event={"ID":"06554c04-9d86-4813-b92c-669a3ae5a776","Type":"ContainerStarted","Data":"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.071332 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.073486 5129 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dkmmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" 
start-of-body= Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.073548 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.096210 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.097764 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.59774375 +0000 UTC m=+117.401273767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.113389 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" event={"ID":"0c5fd5f0-66e1-44f4-bfb0-093595462a64","Type":"ContainerStarted","Data":"c0de9c429232c931a42a48ca8e279729b8cbdbaf6c7da666e923c98fad4d2fc0"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.117839 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbzb7" podStartSLOduration=97.117822093 podStartE2EDuration="1m37.117822093s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.047534543 +0000 UTC m=+116.851064560" watchObservedRunningTime="2025-12-11 16:56:13.117822093 +0000 UTC m=+116.921352110" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.144173 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" event={"ID":"6d2ac0a1-c1ee-481f-ba8a-498974954c9b","Type":"ContainerStarted","Data":"dae71949652ea26afe9cf4a816291aafb9d1462ff2083e4419fda70ee9190a5b"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.157398 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" 
event={"ID":"652b8717-fc5d-4c51-bcb9-286947184f64","Type":"ContainerStarted","Data":"dd4318a360f0c7c94a02b7a3bd5c73e2426ce71f9ddeca1f1c33c73550871e7e"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.157459 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" event={"ID":"652b8717-fc5d-4c51-bcb9-286947184f64","Type":"ContainerStarted","Data":"32e9f0d65ccb08e97dcffc8d1792e68664ef6e7056763862e353f825dd965782"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.179678 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"bed4209bb23852fa9360c2d0aa61eedffa257541e93bb0f9eff0f02cae55937d"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.180449 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.188706 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" podStartSLOduration=97.188695509 podStartE2EDuration="1m37.188695509s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.124678545 +0000 UTC m=+116.928208562" watchObservedRunningTime="2025-12-11 16:56:13.188695509 +0000 UTC m=+116.992225526" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.193358 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.193383 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.197563 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.199758 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.699746222 +0000 UTC m=+117.503276239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.200333 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" event={"ID":"982ae1d1-db2c-4b76-9450-77d8ca4f6e11","Type":"ContainerStarted","Data":"f284e2f7aa536529602b7a47b5f8a7d52e805dc10b96ea59aa11ee7d18e4fea3"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.200933 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.217327 5129 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" event={"ID":"5b699a9a-5674-4fb3-a4af-aad938990365","Type":"ContainerStarted","Data":"f8f4dba54fbb90344d4d25dde25cc69d614fc686ed097dc8c7b6d799d12508a7"} Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.231680 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" podStartSLOduration=97.231660582 podStartE2EDuration="1m37.231660582s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.189492704 +0000 UTC m=+116.993022731" watchObservedRunningTime="2025-12-11 16:56:13.231660582 +0000 UTC m=+117.035190599" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.231765 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.235675 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-np8vg" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.259656 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5pj2z" podStartSLOduration=97.259637849 podStartE2EDuration="1m37.259637849s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.231989082 +0000 UTC m=+117.035519099" watchObservedRunningTime="2025-12-11 16:56:13.259637849 +0000 UTC m=+117.063167866" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.260339 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-2rsfc" podStartSLOduration=97.260335441 podStartE2EDuration="1m37.260335441s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.260062043 +0000 UTC m=+117.063592050" watchObservedRunningTime="2025-12-11 16:56:13.260335441 +0000 UTC m=+117.063865458" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.262458 5129 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6r5mg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded" start-of-body= Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.262493 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg" podUID="d3472ec7-ae60-436a-a49d-12e78eaa0d6c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.299145 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.300241 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.800225188 +0000 UTC m=+117.603755195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.328754 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" podStartSLOduration=97.328733501 podStartE2EDuration="1m37.328733501s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:13.325008496 +0000 UTC m=+117.128538513" watchObservedRunningTime="2025-12-11 16:56:13.328733501 +0000 UTC m=+117.132263528" Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.400951 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.401333 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:13.901316871 +0000 UTC m=+117.704846889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.414886 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-98qvs"] Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.502194 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.502361 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.002336384 +0000 UTC m=+117.805866401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.502445 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.502846 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.002816259 +0000 UTC m=+117.806346276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.603523 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.603681 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.103652975 +0000 UTC m=+117.907183002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.603923 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.604235 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.104224783 +0000 UTC m=+117.907754800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.705007 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.705188 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.205161682 +0000 UTC m=+118.008691699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.705379 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.705817 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.205802763 +0000 UTC m=+118.009332780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.806430 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.806870 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.306854986 +0000 UTC m=+118.110385003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.908154 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:13 crc kubenswrapper[5129]: E1211 16:56:13.908476 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.408464305 +0000 UTC m=+118.211994322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.920712 5129 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-xcrfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:56:13 crc kubenswrapper[5129]: [-]has-synced failed: reason withheld Dec 11 16:56:13 crc kubenswrapper[5129]: [+]process-running ok Dec 11 16:56:13 crc kubenswrapper[5129]: healthz check failed Dec 11 16:56:13 crc kubenswrapper[5129]: I1211 16:56:13.920774 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" podUID="799084e4-c663-46fd-b6d2-ce5de36e3bc6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.009902 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.010208 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:56:14.51018679 +0000 UTC m=+118.313716807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.094905 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.116383 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.116870 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.616854627 +0000 UTC m=+118.420384644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.217330 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.217589 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.717557379 +0000 UTC m=+118.521087396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.242156 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"7c6048843ea905d2667b80c129c6f573796c0ce6e53d412dc5f8acc9ea4760ba"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.247285 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-86b4q" event={"ID":"982ae1d1-db2c-4b76-9450-77d8ca4f6e11","Type":"ContainerStarted","Data":"929342298dcfe301e3bcaad026c2731b95235b5d0ad6a410331f30a5e583e10e"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.251247 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" event={"ID":"5b699a9a-5674-4fb3-a4af-aad938990365","Type":"ContainerStarted","Data":"575dec695e585aea55b1b79af464b8959621c3f0701f7a993b8b432172190a3d"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.253691 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e27e6a231d32b76a7e83e99d9ecbd0b7c025f884eaf4b2cd394993739c5b785c"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.256646 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" event={"ID":"102d30b5-bb95-4477-a08f-93f2ba3259f9","Type":"ContainerStarted","Data":"82f5eb86c1cf13e629d7436a3fec318b2ce14ac60db9e05178a1ff81dabce7e1"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.256703 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" event={"ID":"102d30b5-bb95-4477-a08f-93f2ba3259f9","Type":"ContainerStarted","Data":"8f43980a25968607bff1a8bbad5d625a27bc0ad537697ea023b59a927d17dbac"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.264851 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-p9m8g" event={"ID":"9defd256-bb45-4406-9160-111816ac3c7c","Type":"ContainerStarted","Data":"bc604b1c7b831a7aeb7ca2dc85bd9757adf5d28ec3a7216fe51e7737aeb1beec"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.268017 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" event={"ID":"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5","Type":"ContainerStarted","Data":"d6eac77b8f493e1b66ae2ff135868970020855365529657f53486424346f1786"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.268064 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" event={"ID":"92a20d5e-2840-4b8f-9cc9-5c414c1ea1f5","Type":"ContainerStarted","Data":"b488be107102cf140b65796d1320e2eeeb940047ef48cc5bbb4f5d3f8ce28695"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.273494 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-964vr" event={"ID":"2fb633ee-b572-429f-bbb7-da362cc9f946","Type":"ContainerStarted","Data":"b029e5c3784adbe6553943e5db53aa6f7642c716920dbd90b3114d6897035777"} Dec 
11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.276683 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-2zp55" podStartSLOduration=98.276665202 podStartE2EDuration="1m38.276665202s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.27660115 +0000 UTC m=+118.080131167" watchObservedRunningTime="2025-12-11 16:56:14.276665202 +0000 UTC m=+118.080195219" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.278349 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" event={"ID":"f7d64c96-60a7-477c-b039-e4201dd39ea7","Type":"ContainerStarted","Data":"bbad635d872fbe33fc7bf3020022c87c777c89e4af111c972b81e204ddf5c2d0"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.278975 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.282848 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42608: no serving certificate available for the kubelet" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.286301 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" event={"ID":"891601c4-e560-443f-a221-52b6fdc85cd3","Type":"ContainerStarted","Data":"ca3f4dc1598bc03eb9fe859d76d5a25b252b704b4afed19735e82e2f72ecc1eb"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.293493 5129 generic.go:358] "Generic (PLEG): container finished" podID="02b278cf-b87e-4f64-9619-748b8a89619d" containerID="5f8f288252a2492ef6bb7bc21284ea01c59f002c9d09c463dded1ecc19b70b7d" exitCode=0 Dec 11 16:56:14 crc kubenswrapper[5129]: 
I1211 16:56:14.293596 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" event={"ID":"02b278cf-b87e-4f64-9619-748b8a89619d","Type":"ContainerDied","Data":"5f8f288252a2492ef6bb7bc21284ea01c59f002c9d09c463dded1ecc19b70b7d"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.305771 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"00f59f5385df1a65b56b5c133f423ab8d465f63a1ab995f62d75df453a502f48"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.309118 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xrzh8" event={"ID":"bc8134ed-b13c-4f8a-9821-aabcf7600ddb","Type":"ContainerStarted","Data":"c4064458e80b77fb78a56caf6eca6270b7dec05de741493657b691fc0aebeb30"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.309483 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-xrzh8" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.319403 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.320870 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.820854962 +0000 UTC m=+118.624384979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.329451 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rvbh6" event={"ID":"710e0795-32cf-4429-96de-01508f08690d","Type":"ContainerStarted","Data":"ea3633045a2b29e62fd98dd7ce3de86508b380f5d3967a5c3a6260be3585639b"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.356781 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-499sg" podStartSLOduration=98.356760885 podStartE2EDuration="1m38.356760885s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.340042616 +0000 UTC m=+118.143572633" watchObservedRunningTime="2025-12-11 16:56:14.356760885 +0000 UTC m=+118.160290902" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.383060 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-w5mtv" podStartSLOduration=98.38303737 podStartE2EDuration="1m38.38303737s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.380345376 +0000 UTC m=+118.183875403" watchObservedRunningTime="2025-12-11 16:56:14.38303737 +0000 UTC 
m=+118.186567397" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.402842 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" event={"ID":"b86b2c85-dc72-4796-b7be-553b02ee6b3c","Type":"ContainerStarted","Data":"b00230aadecbdee5e80617974f4d7210e7b583f78e380096eaa6a54956d7dcd6"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.403055 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42612: no serving certificate available for the kubelet" Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.403070 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" event={"ID":"b86b2c85-dc72-4796-b7be-553b02ee6b3c","Type":"ContainerStarted","Data":"2df9945b49a22234fb765989c07d6026259cde3ec229fd658517e40189116a71"} Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.421052 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.421360 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.921306876 +0000 UTC m=+118.724836893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.421463 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.422987 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:14.922974297 +0000 UTC m=+118.726504304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.437846 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-l44p8" event={"ID":"f368f463-ed5c-4b90-bb12-82794199158b","Type":"ContainerStarted","Data":"5ae5ef91216e1264199d2083efce81e5215c564cf235128c44a21d39b3ad79a0"}
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.464801 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fptr2" event={"ID":"15d52990-0733-45fe-ac96-429a9503dbab","Type":"ContainerStarted","Data":"1838f4914a04262d34bcad893c7b8e5229c2d9f2d405679c1db82288c8506f34"}
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.464854 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fptr2" event={"ID":"15d52990-0733-45fe-ac96-429a9503dbab","Type":"ContainerStarted","Data":"5e1a6ef48196a704ffc0775bbfcb3336849af3cce65daba9a41e513036a5caf2"}
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.492589 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-964vr" podStartSLOduration=98.492570716 podStartE2EDuration="1m38.492570716s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.462545724 +0000 UTC m=+118.266075741" watchObservedRunningTime="2025-12-11 16:56:14.492570716 +0000 UTC m=+118.296100743"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.506724 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42624: no serving certificate available for the kubelet"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.511422 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" event={"ID":"230d02ac-28f0-4758-91cc-577a7c62dece","Type":"ContainerStarted","Data":"ed33eb03024cf46033684faaca7a1c85e6a00986258f17186574b321dab22c96"}
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.529118 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.530284 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.030253894 +0000 UTC m=+118.833783911 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.532455 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb" podStartSLOduration=98.532441381 podStartE2EDuration="1m38.532441381s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.532002649 +0000 UTC m=+118.335532666" watchObservedRunningTime="2025-12-11 16:56:14.532441381 +0000 UTC m=+118.335971398"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.533362 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xrzh8" podStartSLOduration=9.53335681 podStartE2EDuration="9.53335681s" podCreationTimestamp="2025-12-11 16:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.491957987 +0000 UTC m=+118.295488014" watchObservedRunningTime="2025-12-11 16:56:14.53335681 +0000 UTC m=+118.336886837"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.565196 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" event={"ID":"0c5fd5f0-66e1-44f4-bfb0-093595462a64","Type":"ContainerStarted","Data":"b5ed495917bf8404a6578df3e32c121377c816de37eebe553fd2ad9582354137"}
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.574961 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.575491 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6r5mg"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.579033 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-xhkdc"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.610654 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42632: no serving certificate available for the kubelet"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.612536 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-hxxwl" podStartSLOduration=98.612510274 podStartE2EDuration="1m38.612510274s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.611187783 +0000 UTC m=+118.414717800" watchObservedRunningTime="2025-12-11 16:56:14.612510274 +0000 UTC m=+118.416040291"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.631397 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.637683 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.137668204 +0000 UTC m=+118.941198221 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.710850 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42638: no serving certificate available for the kubelet"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.732912 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.733279 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.233244868 +0000 UTC m=+119.036774885 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.733890 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.734196 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.234186796 +0000 UTC m=+119.037716813 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.742420 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-d5lnt" podStartSLOduration=98.742401851 podStartE2EDuration="1m38.742401851s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.730974856 +0000 UTC m=+118.534504873" watchObservedRunningTime="2025-12-11 16:56:14.742401851 +0000 UTC m=+118.545931868"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.835206 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.835461 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.335446786 +0000 UTC m=+119.138976803 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.871320 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9q2rp" podStartSLOduration=98.871306798 podStartE2EDuration="1m38.871306798s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.817420758 +0000 UTC m=+118.620950775" watchObservedRunningTime="2025-12-11 16:56:14.871306798 +0000 UTC m=+118.674836815"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.871584 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fptr2" podStartSLOduration=98.871581756 podStartE2EDuration="1m38.871581756s" podCreationTimestamp="2025-12-11 16:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:14.869222093 +0000 UTC m=+118.672752110" watchObservedRunningTime="2025-12-11 16:56:14.871581756 +0000 UTC m=+118.675111773"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.883426 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42646: no serving certificate available for the kubelet"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.888885 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"]
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.899365 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.907526 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"]
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.909080 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.922792 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.936393 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcdh\" (UniqueName: \"kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.936677 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.936775 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:14 crc kubenswrapper[5129]: I1211 16:56:14.936866 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:14 crc kubenswrapper[5129]: E1211 16:56:14.937163 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.437151649 +0000 UTC m=+119.240681666 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.037766 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.038014 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.537982376 +0000 UTC m=+119.341512393 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038078 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcdh\" (UniqueName: \"kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038293 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038314 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038391 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038764 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.038797 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.538781301 +0000 UTC m=+119.342311318 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.038883 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.065473 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcdh\" (UniqueName: \"kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh\") pod \"certified-operators-mjbrt\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") " pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.086292 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.104008 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc2p6"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.140022 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.144702 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.644680154 +0000 UTC m=+119.448210171 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.144772 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.145203 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.64518697 +0000 UTC m=+119.448716987 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.182582 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42648: no serving certificate available for the kubelet"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.223111 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.252044 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.252298 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.752282949 +0000 UTC m=+119.555812966 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.356951 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.357306 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.857288766 +0000 UTC m=+119.660818783 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.420834 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.421275 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.427419 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc2p6"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.427456 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.427468 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.428403 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.430741 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.435355 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.435780 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.436312 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.438245 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.461395 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.461696 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.461735 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79rw\" (UniqueName: \"kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.461795 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.461938 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:15.961917879 +0000 UTC m=+119.765447896 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.506679 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gq47r"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.548371 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq47r"]
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.548585 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564016 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564058 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l79rw\" (UniqueName: \"kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564095 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564122 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564140 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564181 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564198 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564241 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhz4\" (UniqueName: \"kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564272 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.564715 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.565232 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.065219732 +0000 UTC m=+119.868749749 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.565589 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.575057 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42660: no serving certificate available for the kubelet"
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.582018 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xrzh8" event={"ID":"bc8134ed-b13c-4f8a-9821-aabcf7600ddb","Type":"ContainerStarted","Data":"741d51e93683773de0b7ae954c08d53d7afc77a440827c1bae7fd4b5e75cf201"}
Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.585448 5129 kuberuntime_container.go:858]
"Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" gracePeriod=30 Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.590156 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.605791 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-xcrfz" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.636565 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79rw\" (UniqueName: \"kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw\") pod \"community-operators-nc2p6\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") " pod="openshift-marketplace/community-operators-nc2p6" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.665670 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.666232 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.666397 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.666473 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfm9s\" (UniqueName: \"kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.666504 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.667047 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhz4\" (UniqueName: \"kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.667147 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc 
kubenswrapper[5129]: I1211 16:56:15.667297 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.667734 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.668106 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.168089862 +0000 UTC m=+119.971619879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.672413 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.676731 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.680278 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.699479 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhz4\" (UniqueName: \"kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4\") pod \"certified-operators-n7wt5\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 
16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.710241 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.768729 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.768763 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfm9s\" (UniqueName: \"kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.768790 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.768834 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " 
pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.769218 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.769351 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.769476 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.269458145 +0000 UTC m=+120.072988162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.784905 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc2p6" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.804573 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.809915 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfm9s\" (UniqueName: \"kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s\") pod \"community-operators-gq47r\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") " pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.828364 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.844243 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"] Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.869382 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.869814 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.369792035 +0000 UTC m=+120.173322052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.882861 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq47r" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.962187 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" Dec 11 16:56:15 crc kubenswrapper[5129]: I1211 16:56:15.970498 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:15 crc kubenswrapper[5129]: E1211 16:56:15.970830 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.470815238 +0000 UTC m=+120.274345255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.071353 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume\") pod \"02b278cf-b87e-4f64-9619-748b8a89619d\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.071675 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.071745 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume\") pod \"02b278cf-b87e-4f64-9619-748b8a89619d\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.071816 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.571791589 +0000 UTC m=+120.375321606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.071974 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqx72\" (UniqueName: \"kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72\") pod \"02b278cf-b87e-4f64-9619-748b8a89619d\" (UID: \"02b278cf-b87e-4f64-9619-748b8a89619d\") " Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.072192 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.072652 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.572641324 +0000 UTC m=+120.376171341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.072658 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume" (OuterVolumeSpecName: "config-volume") pod "02b278cf-b87e-4f64-9619-748b8a89619d" (UID: "02b278cf-b87e-4f64-9619-748b8a89619d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.072955 5129 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02b278cf-b87e-4f64-9619-748b8a89619d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.081320 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02b278cf-b87e-4f64-9619-748b8a89619d" (UID: "02b278cf-b87e-4f64-9619-748b8a89619d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.085396 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72" (OuterVolumeSpecName: "kube-api-access-qqx72") pod "02b278cf-b87e-4f64-9619-748b8a89619d" (UID: "02b278cf-b87e-4f64-9619-748b8a89619d"). 
InnerVolumeSpecName "kube-api-access-qqx72". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.174311 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.175080 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.67505854 +0000 UTC m=+120.478588567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.175137 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.175187 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqx72\" (UniqueName: 
\"kubernetes.io/projected/02b278cf-b87e-4f64-9619-748b8a89619d-kube-api-access-qqx72\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.175200 5129 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02b278cf-b87e-4f64-9619-748b8a89619d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.175441 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.675432911 +0000 UTC m=+120.478962938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.249775 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42670: no serving certificate available for the kubelet" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.276456 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.276823 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.776793284 +0000 UTC m=+120.580323301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.329054 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"] Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.378218 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.378573 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:16.878557519 +0000 UTC m=+120.682087536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.410733 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc2p6"] Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.482235 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.500243 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.000217501 +0000 UTC m=+120.803747518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.545932 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.589876 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.590258 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.090244772 +0000 UTC m=+120.893774789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.623154 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.624059 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424525-7fxt9" event={"ID":"02b278cf-b87e-4f64-9619-748b8a89619d","Type":"ContainerDied","Data":"3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.624090 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.642781 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerStarted","Data":"b6957f66d878d2c7a8427fe6b289c618c4f6caba2b837e5adf122b23a89ce9e6"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.644448 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq47r"] Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.680918 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"2d152f2f-4642-428b-b6da-7cc4f687eb71","Type":"ContainerStarted","Data":"fa68b844af264fb3e1a167af5f30d13c62d06730ce699590531011d78b7ee7a6"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.685380 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerStarted","Data":"29b9a519fa9aeac863d1b530179617669eab440bfc441aabf904deebe211771a"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.696111 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.696465 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.196447675 +0000 UTC m=+120.999977692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.740798 5129 generic.go:358] "Generic (PLEG): container finished" podID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerID="22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db" exitCode=0 Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.742558 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerDied","Data":"22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.742593 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerStarted","Data":"33283e4d339113fd865282947bd428f139cec50c0cb633e540a138e200c554c5"} Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.797379 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.800426 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.300414318 +0000 UTC m=+121.103944525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.880081 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"] Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.880896 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02b278cf-b87e-4f64-9619-748b8a89619d" containerName="collect-profiles" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.880922 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b278cf-b87e-4f64-9619-748b8a89619d" containerName="collect-profiles" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.881080 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="02b278cf-b87e-4f64-9619-748b8a89619d" containerName="collect-profiles" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.899245 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:16 crc kubenswrapper[5129]: E1211 16:56:16.899777 5129 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.399755198 +0000 UTC m=+121.203285215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.902176 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.909033 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 11 16:56:16 crc kubenswrapper[5129]: I1211 16:56:16.909060 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"] Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.001621 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.002000 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-11 16:56:17.501984188 +0000 UTC m=+121.305514205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.102469 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.102884 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.602862325 +0000 UTC m=+121.406392342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.103225 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.103261 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.103335 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.103418 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdst\" (UniqueName: 
\"kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.103815 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.603804665 +0000 UTC m=+121.407334682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.126716 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9800ee9_8362_47af_ae1f_b4b2c91d08a1.slice/crio-39f593f8fdc8a53347077d0a9692c3ffacc85badeaf9e7734ea92aeba7768069.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb51f4fcc_9be5_4925_b35e_75dca772e189.slice/crio-4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9800ee9_8362_47af_ae1f_b4b2c91d08a1.slice/crio-conmon-39f593f8fdc8a53347077d0a9692c3ffacc85badeaf9e7734ea92aeba7768069.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb51f4fcc_9be5_4925_b35e_75dca772e189.slice/crio-conmon-4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a.scope\": RecentStats: unable to find data in memory cache]" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.184483 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.185110 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.190441 5129 patch_prober.go:28] interesting pod/console-64d44f6ddf-jhm42 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.190499 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-jhm42" podUID="83730945-5deb-4b14-988b-24d05e851543" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.207476 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:17 crc kubenswrapper[5129]: 
E1211 16:56:17.207616 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.707589023 +0000 UTC m=+121.511119040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.207707 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-czdst\" (UniqueName: \"kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.207758 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.207786 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " 
pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.207830 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.208085 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.708077758 +0000 UTC m=+121.511607775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.208260 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.210790 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " 
pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.234308 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdst\" (UniqueName: \"kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst\") pod \"redhat-marketplace-7khq8\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") " pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.281469 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"] Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.290023 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.291895 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"] Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.308398 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.309192 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.809163062 +0000 UTC m=+121.612693079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.410674 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.410776 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.410819 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.410843 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7j8p\" (UniqueName: 
\"kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.411228 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:17.911211956 +0000 UTC m=+121.714741973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.416243 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.511780 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.511994 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.512025 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.512041 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7j8p\" (UniqueName: \"kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.512397 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:56:18.012379762 +0000 UTC m=+121.815909779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.512778 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.512981 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.538542 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7j8p\" (UniqueName: \"kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p\") pod \"redhat-marketplace-9tssw\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") " pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.613989 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42676: no serving certificate available for the kubelet" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.615338 5129 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.615686 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.115674505 +0000 UTC m=+121.919204522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.624761 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9tssw" Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.717990 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.718384 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.218350008 +0000 UTC m=+122.021880025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.785632 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d152f2f-4642-428b-b6da-7cc4f687eb71","Type":"ContainerStarted","Data":"8a58fd409acf4dae7c6b89e7a6aef3a2e35ad64ca121c510e5f8dc353927d2c9"} Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.789167 5129 generic.go:358] "Generic (PLEG): container finished" podID="c524108b-2e35-4faa-9711-c13139f1321f" containerID="9682125c0bb0a473b1ffd1c7e720d12c01a30cc1ef7db47355151b3a0b85e51f" exitCode=0 Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.789282 5129 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerDied","Data":"9682125c0bb0a473b1ffd1c7e720d12c01a30cc1ef7db47355151b3a0b85e51f"}
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.789308 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerStarted","Data":"1a87bb511b417e3908be82c211160e4751c447f958d29f8689b73c4a1963fa05"}
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.796035 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"]
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.809446 5129 generic.go:358] "Generic (PLEG): container finished" podID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerID="39f593f8fdc8a53347077d0a9692c3ffacc85badeaf9e7734ea92aeba7768069" exitCode=0
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.809631 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerDied","Data":"39f593f8fdc8a53347077d0a9692c3ffacc85badeaf9e7734ea92aeba7768069"}
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.811709 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.811689902 podStartE2EDuration="2.811689902s" podCreationTimestamp="2025-12-11 16:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:17.810760683 +0000 UTC m=+121.614290700" watchObservedRunningTime="2025-12-11 16:56:17.811689902 +0000 UTC m=+121.615219919"
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.824287 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.826042 5129 generic.go:358] "Generic (PLEG): container finished" podID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerID="4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a" exitCode=0
Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.826143 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.32613046 +0000 UTC m=+122.129660477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.826271 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerDied","Data":"4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a"}
Dec 11 16:56:17 crc kubenswrapper[5129]: W1211 16:56:17.845732 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e5898b2_33b2_465b_bf38_07d11c8f67f1.slice/crio-bb8ed1027c2f1c9b535345acd61e47f7299fddb1d9b5f7fa0b449e4acd1b589c WatchSource:0}: Error finding container bb8ed1027c2f1c9b535345acd61e47f7299fddb1d9b5f7fa0b449e4acd1b589c: Status 404 returned error can't find the container with id bb8ed1027c2f1c9b535345acd61e47f7299fddb1d9b5f7fa0b449e4acd1b589c
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.925328 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.925895 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.425877872 +0000 UTC m=+122.229407889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:17 crc kubenswrapper[5129]: I1211 16:56:17.926047 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:17 crc kubenswrapper[5129]: E1211 16:56:17.927120 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.427110691 +0000 UTC m=+122.230640708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.021039 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.028037 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.028473 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.528453523 +0000 UTC m=+122.331983530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.062371 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.068865 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.076328 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.082312 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.130818 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.130871 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b672j\" (UniqueName: \"kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.130924 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.130943 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.131269 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.63125391 +0000 UTC m=+122.434783927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.222226 5129 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.231925 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.232159 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.232275 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b672j\" (UniqueName: \"kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.232375 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.233004 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.233344 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.733314564 +0000 UTC m=+122.536844601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.234157 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.262970 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b672j\" (UniqueName: \"kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j\") pod \"redhat-operators-dcf8z\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") " pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.334197 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.334486 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.83447448 +0000 UTC m=+122.638004497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.394948 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.435231 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.435391 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.935371239 +0000 UTC m=+122.738901256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.435636 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.436149 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:56:18.936129462 +0000 UTC m=+122.739659479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-87vjc" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.472555 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.479849 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.482097 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.539166 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:18 crc kubenswrapper[5129]: E1211 16:56:18.539804 5129 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:56:19.039787506 +0000 UTC m=+122.843317523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.637625 5129 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T16:56:18.222249551Z","UUID":"030f104f-b689-4dcc-9f49-b373d46c1c63","Handler":null,"Name":"","Endpoint":""}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641160 5129 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641188 5129 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641552 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641613 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8rp\" (UniqueName: \"kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641669 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.641689 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.645657 5129 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.645700 5129 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.706391 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-87vjc\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") " pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.716039 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"]
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.742919 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.744826 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-md8rp\" (UniqueName: \"kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.744905 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.744927 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.746480 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.746598 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.751820 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.774489 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.779092 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-md8rp\" (UniqueName: \"kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp\") pod \"redhat-operators-zvjfk\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.784282 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.802137 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.835557 5129 generic.go:358] "Generic (PLEG): container finished" podID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerID="468901ef9253541e6510cb76e65a92006ef39bf97ba30900da568191786fd827" exitCode=0
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.835697 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerDied","Data":"468901ef9253541e6510cb76e65a92006ef39bf97ba30900da568191786fd827"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.835766 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerStarted","Data":"ffcc277da8f6a01992a22397af949bd3ea4fddd3febeadffd790bb4a090755be"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.836466 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerStarted","Data":"09de71e3f6d9aa7d4231c4f73ae5b1096e745212672541fdd0a0b89c80555b7e"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.838855 5129 generic.go:358] "Generic (PLEG): container finished" podID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerID="45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf" exitCode=0
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.838986 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerDied","Data":"45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.839029 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerStarted","Data":"bb8ed1027c2f1c9b535345acd61e47f7299fddb1d9b5f7fa0b449e4acd1b589c"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.840843 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" event={"ID":"891601c4-e560-443f-a221-52b6fdc85cd3","Type":"ContainerStarted","Data":"8c78f69318487bc533f4613b97724d9e664607d4431b83dae1e1744782904742"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.840880 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" event={"ID":"891601c4-e560-443f-a221-52b6fdc85cd3","Type":"ContainerStarted","Data":"d7dab337b8b4c5b5207a8405fb874b7d31d2476c0c663353ac6767bb8681dc28"}
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.876464 5129 generic.go:358] "Generic (PLEG): container finished" podID="2d152f2f-4642-428b-b6da-7cc4f687eb71" containerID="8a58fd409acf4dae7c6b89e7a6aef3a2e35ad64ca121c510e5f8dc353927d2c9" exitCode=0
Dec 11 16:56:18 crc kubenswrapper[5129]: I1211 16:56:18.876583 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d152f2f-4642-428b-b6da-7cc4f687eb71","Type":"ContainerDied","Data":"8a58fd409acf4dae7c6b89e7a6aef3a2e35ad64ca121c510e5f8dc353927d2c9"}
Dec 11 16:56:19 crc kubenswrapper[5129]: I1211 16:56:19.062871 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"]
Dec 11 16:56:19 crc kubenswrapper[5129]: I1211 16:56:19.119700 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"]
Dec 11 16:56:19 crc kubenswrapper[5129]: W1211 16:56:19.143611 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod836875ec_a9b9_41eb_9552_b8af7e552247.slice/crio-65a5d953619923ec8a5b2e0ace035e84fb5bb9a48dbe2bfb41a5bf8465ac640b WatchSource:0}: Error finding container 65a5d953619923ec8a5b2e0ace035e84fb5bb9a48dbe2bfb41a5bf8465ac640b: Status 404 returned error can't find the container with id 65a5d953619923ec8a5b2e0ace035e84fb5bb9a48dbe2bfb41a5bf8465ac640b
Dec 11 16:56:19 crc kubenswrapper[5129]: I1211 16:56:19.538840 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 11 16:56:20 crc kubenswrapper[5129]: I1211 16:56:20.218829 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42688: no serving certificate available for the kubelet"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.003906 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.008343 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.009715 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.017207 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.018065 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerStarted","Data":"65a5d953619923ec8a5b2e0ace035e84fb5bb9a48dbe2bfb41a5bf8465ac640b"}
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.018117 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.018137 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" event={"ID":"771edef5-cdf9-463f-8fa5-824e3d0f0f0d","Type":"ContainerStarted","Data":"b7b1baf791d386c8e2e51d614395825d7a73836b5675ad7130b39ef99de804b2"}
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.099407 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-nmqql"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.103243 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.103357 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.204734 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.204853 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.204973 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.222411 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.254134 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.305671 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d152f2f-4642-428b-b6da-7cc4f687eb71" (UID: "2d152f2f-4642-428b-b6da-7cc4f687eb71"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.305541 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir\") pod \"2d152f2f-4642-428b-b6da-7cc4f687eb71\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.305885 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access\") pod \"2d152f2f-4642-428b-b6da-7cc4f687eb71\" (UID: \"2d152f2f-4642-428b-b6da-7cc4f687eb71\") " Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.306759 5129 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d152f2f-4642-428b-b6da-7cc4f687eb71-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.313534 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d152f2f-4642-428b-b6da-7cc4f687eb71" (UID: "2d152f2f-4642-428b-b6da-7cc4f687eb71"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.326351 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.407570 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d152f2f-4642-428b-b6da-7cc4f687eb71-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.508144 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.916558 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" event={"ID":"891601c4-e560-443f-a221-52b6fdc85cd3","Type":"ContainerStarted","Data":"689cc8990a6524bba2dad4b0dec7511dbc18e56a3359fcad9a5a4cc47d0d9380"} Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.917879 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2","Type":"ContainerStarted","Data":"55106c32f307f67569784f4ad25b2b9ce94b9c2651f93d4c5fc5ecde0bfa04b2"} Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.919603 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d152f2f-4642-428b-b6da-7cc4f687eb71","Type":"ContainerDied","Data":"fa68b844af264fb3e1a167af5f30d13c62d06730ce699590531011d78b7ee7a6"} Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.919631 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa68b844af264fb3e1a167af5f30d13c62d06730ce699590531011d78b7ee7a6" Dec 11 16:56:21 crc kubenswrapper[5129]: I1211 16:56:21.919698 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:56:22 crc kubenswrapper[5129]: E1211 16:56:22.789200 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:22 crc kubenswrapper[5129]: E1211 16:56:22.793364 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:22 crc kubenswrapper[5129]: E1211 16:56:22.795768 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:22 crc kubenswrapper[5129]: E1211 16:56:22.795809 5129 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.928360 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerStarted","Data":"b3050582618866f0b5a4f951c2f91b03b997b301cd420dd077c72c54b1c4c171"} Dec 11 
16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.929988 5129 generic.go:358] "Generic (PLEG): container finished" podID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerID="33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e" exitCode=0 Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.930107 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerDied","Data":"33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e"} Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.933772 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" event={"ID":"771edef5-cdf9-463f-8fa5-824e3d0f0f0d","Type":"ContainerStarted","Data":"c0ad7fad2509048a88ed886cb637f4a9d49f8c063e2c428bb01e66e5b015fa1b"} Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.934001 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.973626 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xjqrz" podStartSLOduration=17.973610793 podStartE2EDuration="17.973610793s" podCreationTimestamp="2025-12-11 16:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:22.972269862 +0000 UTC m=+126.775799889" watchObservedRunningTime="2025-12-11 16:56:22.973610793 +0000 UTC m=+126.777140810" Dec 11 16:56:22 crc kubenswrapper[5129]: I1211 16:56:22.997730 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" podStartSLOduration=106.997715951 podStartE2EDuration="1m46.997715951s" podCreationTimestamp="2025-12-11 16:54:36 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:56:22.996288656 +0000 UTC m=+126.799818703" watchObservedRunningTime="2025-12-11 16:56:22.997715951 +0000 UTC m=+126.801245958" Dec 11 16:56:23 crc kubenswrapper[5129]: I1211 16:56:23.830101 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xrzh8" Dec 11 16:56:23 crc kubenswrapper[5129]: I1211 16:56:23.940051 5129 generic.go:358] "Generic (PLEG): container finished" podID="836875ec-a9b9-41eb-9552-b8af7e552247" containerID="b3050582618866f0b5a4f951c2f91b03b997b301cd420dd077c72c54b1c4c171" exitCode=0 Dec 11 16:56:23 crc kubenswrapper[5129]: I1211 16:56:23.940264 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerDied","Data":"b3050582618866f0b5a4f951c2f91b03b997b301cd420dd077c72c54b1c4c171"} Dec 11 16:56:23 crc kubenswrapper[5129]: I1211 16:56:23.950624 5129 generic.go:358] "Generic (PLEG): container finished" podID="ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" containerID="0bd957914336fc816765602478d432d9799f9d7a229c7cbffc48793a949f610c" exitCode=0 Dec 11 16:56:23 crc kubenswrapper[5129]: I1211 16:56:23.950661 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2","Type":"ContainerDied","Data":"0bd957914336fc816765602478d432d9799f9d7a229c7cbffc48793a949f610c"} Dec 11 16:56:25 crc kubenswrapper[5129]: I1211 16:56:25.370118 5129 ???:1] "http: TLS handshake error from 192.168.126.11:37420: no serving certificate available for the kubelet" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.125446 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.181198 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir\") pod \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.181287 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access\") pod \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\" (UID: \"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2\") " Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.181380 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" (UID: "ea9bc8b1-7af5-4b15-b807-6a42f5405fc2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.181877 5129 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.191323 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" (UID: "ea9bc8b1-7af5-4b15-b807-6a42f5405fc2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.283922 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea9bc8b1-7af5-4b15-b807-6a42f5405fc2-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.974133 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea9bc8b1-7af5-4b15-b807-6a42f5405fc2","Type":"ContainerDied","Data":"55106c32f307f67569784f4ad25b2b9ce94b9c2651f93d4c5fc5ecde0bfa04b2"} Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.974438 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55106c32f307f67569784f4ad25b2b9ce94b9c2651f93d4c5fc5ecde0bfa04b2" Dec 11 16:56:26 crc kubenswrapper[5129]: I1211 16:56:26.974546 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:56:27 crc kubenswrapper[5129]: I1211 16:56:27.183863 5129 patch_prober.go:28] interesting pod/console-64d44f6ddf-jhm42 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 11 16:56:27 crc kubenswrapper[5129]: I1211 16:56:27.183924 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-jhm42" podUID="83730945-5deb-4b14-988b-24d05e851543" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 11 16:56:27 crc kubenswrapper[5129]: E1211 16:56:27.272268 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache]" Dec 11 16:56:30 crc kubenswrapper[5129]: I1211 16:56:30.086797 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" Dec 11 16:56:32 crc kubenswrapper[5129]: E1211 16:56:32.788539 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:32 crc kubenswrapper[5129]: E1211 16:56:32.790182 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:32 crc kubenswrapper[5129]: E1211 16:56:32.791887 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:32 crc kubenswrapper[5129]: E1211 16:56:32.791940 5129 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins" 
probeResult="unknown" Dec 11 16:56:35 crc kubenswrapper[5129]: I1211 16:56:35.646215 5129 ???:1] "http: TLS handshake error from 192.168.126.11:46094: no serving certificate available for the kubelet" Dec 11 16:56:37 crc kubenswrapper[5129]: I1211 16:56:37.238352 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:37 crc kubenswrapper[5129]: I1211 16:56:37.243373 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-jhm42" Dec 11 16:56:37 crc kubenswrapper[5129]: E1211 16:56:37.402091 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache]" Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.051405 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerStarted","Data":"16e5a2111d7690a0f28812beaffbb0261dbc6ea3c06e857c3473889f94295c8c"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.053380 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerStarted","Data":"a7416d9ea3d0647ea6d2295662803e3d66053ade0b7b0e40a4d1a494f39b3134"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.055075 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerStarted","Data":"ce07cac56e0bef9cde333933d6154a77c0022fb966a56e7bf7b91a8cdb7e12e1"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 
16:56:41.056867 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerStarted","Data":"44aa459fccfa6a1223426d4ac080ed4a276ae14eb280b6e7f22c175167ccd6a6"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.059663 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerStarted","Data":"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.062125 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerStarted","Data":"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.064938 5129 generic.go:358] "Generic (PLEG): container finished" podID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerID="48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c" exitCode=0 Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.065075 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerDied","Data":"48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c"} Dec 11 16:56:41 crc kubenswrapper[5129]: I1211 16:56:41.069084 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerStarted","Data":"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.074771 5129 generic.go:358] "Generic (PLEG): container finished" podID="c524108b-2e35-4faa-9711-c13139f1321f" 
containerID="16e5a2111d7690a0f28812beaffbb0261dbc6ea3c06e857c3473889f94295c8c" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.075117 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerDied","Data":"16e5a2111d7690a0f28812beaffbb0261dbc6ea3c06e857c3473889f94295c8c"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.079117 5129 generic.go:358] "Generic (PLEG): container finished" podID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerID="a7416d9ea3d0647ea6d2295662803e3d66053ade0b7b0e40a4d1a494f39b3134" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.079255 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerDied","Data":"a7416d9ea3d0647ea6d2295662803e3d66053ade0b7b0e40a4d1a494f39b3134"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.082401 5129 generic.go:358] "Generic (PLEG): container finished" podID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerID="ce07cac56e0bef9cde333933d6154a77c0022fb966a56e7bf7b91a8cdb7e12e1" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.082484 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerDied","Data":"ce07cac56e0bef9cde333933d6154a77c0022fb966a56e7bf7b91a8cdb7e12e1"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.086271 5129 generic.go:358] "Generic (PLEG): container finished" podID="836875ec-a9b9-41eb-9552-b8af7e552247" containerID="44aa459fccfa6a1223426d4ac080ed4a276ae14eb280b6e7f22c175167ccd6a6" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.086320 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" 
event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerDied","Data":"44aa459fccfa6a1223426d4ac080ed4a276ae14eb280b6e7f22c175167ccd6a6"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.088335 5129 generic.go:358] "Generic (PLEG): container finished" podID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerID="e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.088417 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerDied","Data":"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.090458 5129 generic.go:358] "Generic (PLEG): container finished" podID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerID="b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.090888 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerDied","Data":"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.102967 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerStarted","Data":"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.108181 5129 generic.go:358] "Generic (PLEG): container finished" podID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerID="989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5" exitCode=0 Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.108266 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerDied","Data":"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"} Dec 11 16:56:42 crc kubenswrapper[5129]: I1211 16:56:42.240836 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7khq8" podStartSLOduration=4.809350321 podStartE2EDuration="26.240816535s" podCreationTimestamp="2025-12-11 16:56:16 +0000 UTC" firstStartedPulling="2025-12-11 16:56:18.84050434 +0000 UTC m=+122.644034357" lastFinishedPulling="2025-12-11 16:56:40.271970524 +0000 UTC m=+144.075500571" observedRunningTime="2025-12-11 16:56:42.236434649 +0000 UTC m=+146.039964676" watchObservedRunningTime="2025-12-11 16:56:42.240816535 +0000 UTC m=+146.044346552" Dec 11 16:56:42 crc kubenswrapper[5129]: E1211 16:56:42.788641 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:42 crc kubenswrapper[5129]: E1211 16:56:42.791775 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:56:42 crc kubenswrapper[5129]: E1211 16:56:42.793079 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 
16:56:42 crc kubenswrapper[5129]: E1211 16:56:42.793148 5129 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.115222 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerStarted","Data":"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.118684 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerStarted","Data":"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.120636 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerStarted","Data":"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.122410 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerStarted","Data":"a2051acdf53c776979c7f8fc425ce8f8976e4836eb3e205c2836165fb00582a4"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.128928 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerStarted","Data":"719c31c82975fb84358c91082dde92642a7fc6a049fd4d94f211f52fbce5a91a"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.130961 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerStarted","Data":"c689a21229accd66335f557a46fb1eaa72b05bb7afd6fcf470fd283c9471b720"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.131402 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mjbrt" podStartSLOduration=8.922454468 podStartE2EDuration="29.131386875s" podCreationTimestamp="2025-12-11 16:56:14 +0000 UTC" firstStartedPulling="2025-12-11 16:56:16.742336188 +0000 UTC m=+120.545866205" lastFinishedPulling="2025-12-11 16:56:36.951268585 +0000 UTC m=+140.754798612" observedRunningTime="2025-12-11 16:56:43.129258328 +0000 UTC m=+146.932788345" watchObservedRunningTime="2025-12-11 16:56:43.131386875 +0000 UTC m=+146.934916892"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.132952 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerStarted","Data":"4bfa659623b33503616e776393d889d3801241ed18d6e6c64f4bd8684f528b41"}
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.153102 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gq47r" podStartSLOduration=5.68840405 podStartE2EDuration="28.153085913s" podCreationTimestamp="2025-12-11 16:56:15 +0000 UTC" firstStartedPulling="2025-12-11 16:56:17.790057032 +0000 UTC m=+121.593587049" lastFinishedPulling="2025-12-11 16:56:40.254738895 +0000 UTC m=+144.058268912" observedRunningTime="2025-12-11 16:56:43.150997878 +0000 UTC m=+146.954527895" watchObservedRunningTime="2025-12-11 16:56:43.153085913 +0000 UTC m=+146.956615920"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.165593 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n7wt5" podStartSLOduration=5.721638653 podStartE2EDuration="28.165576164s" podCreationTimestamp="2025-12-11 16:56:15 +0000 UTC" firstStartedPulling="2025-12-11 16:56:17.810342241 +0000 UTC m=+121.613872258" lastFinishedPulling="2025-12-11 16:56:40.254279712 +0000 UTC m=+144.057809769" observedRunningTime="2025-12-11 16:56:43.163890531 +0000 UTC m=+146.967420548" watchObservedRunningTime="2025-12-11 16:56:43.165576164 +0000 UTC m=+146.969106181"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.190183 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcf8z" podStartSLOduration=7.826388257 podStartE2EDuration="25.190168473s" podCreationTimestamp="2025-12-11 16:56:18 +0000 UTC" firstStartedPulling="2025-12-11 16:56:22.93093023 +0000 UTC m=+126.734460247" lastFinishedPulling="2025-12-11 16:56:40.294710436 +0000 UTC m=+144.098240463" observedRunningTime="2025-12-11 16:56:43.187009115 +0000 UTC m=+146.990539132" watchObservedRunningTime="2025-12-11 16:56:43.190168473 +0000 UTC m=+146.993698490"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.207117 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nc2p6" podStartSLOduration=5.762488917 podStartE2EDuration="28.207102303s" podCreationTimestamp="2025-12-11 16:56:15 +0000 UTC" firstStartedPulling="2025-12-11 16:56:17.826803181 +0000 UTC m=+121.630333198" lastFinishedPulling="2025-12-11 16:56:40.271416557 +0000 UTC m=+144.074946584" observedRunningTime="2025-12-11 16:56:43.203009835 +0000 UTC m=+147.006539852" watchObservedRunningTime="2025-12-11 16:56:43.207102303 +0000 UTC m=+147.010632320"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.225745 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9tssw" podStartSLOduration=4.780043847 podStartE2EDuration="26.225731776s" podCreationTimestamp="2025-12-11 16:56:17 +0000 UTC" firstStartedPulling="2025-12-11 16:56:18.836419553 +0000 UTC m=+122.639949560" lastFinishedPulling="2025-12-11 16:56:40.282107472 +0000 UTC m=+144.085637489" observedRunningTime="2025-12-11 16:56:43.223231848 +0000 UTC m=+147.026761865" watchObservedRunningTime="2025-12-11 16:56:43.225731776 +0000 UTC m=+147.029261783"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.241873 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvjfk" podStartSLOduration=8.879433419 podStartE2EDuration="25.24185465s" podCreationTimestamp="2025-12-11 16:56:18 +0000 UTC" firstStartedPulling="2025-12-11 16:56:23.941056759 +0000 UTC m=+127.744586776" lastFinishedPulling="2025-12-11 16:56:40.30347798 +0000 UTC m=+144.107008007" observedRunningTime="2025-12-11 16:56:43.238557447 +0000 UTC m=+147.042087474" watchObservedRunningTime="2025-12-11 16:56:43.24185465 +0000 UTC m=+147.045384667"
Dec 11 16:56:43 crc kubenswrapper[5129]: I1211 16:56:43.956744 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.223431 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.224058 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.306872 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.592751 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.786688 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.787062 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.826312 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.828794 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.828826 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.846552 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-98qvs_048e4610-b9c6-4243-8a33-8c6156e3f025/kube-multus-additional-cni-plugins/0.log"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.846635 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.877673 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.883145 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.883863 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.898910 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir\") pod \"048e4610-b9c6-4243-8a33-8c6156e3f025\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") "
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.899001 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist\") pod \"048e4610-b9c6-4243-8a33-8c6156e3f025\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") "
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.899071 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready\") pod \"048e4610-b9c6-4243-8a33-8c6156e3f025\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") "
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.899106 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwqn\" (UniqueName: \"kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn\") pod \"048e4610-b9c6-4243-8a33-8c6156e3f025\" (UID: \"048e4610-b9c6-4243-8a33-8c6156e3f025\") "
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.899860 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "048e4610-b9c6-4243-8a33-8c6156e3f025" (UID: "048e4610-b9c6-4243-8a33-8c6156e3f025"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.900545 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "048e4610-b9c6-4243-8a33-8c6156e3f025" (UID: "048e4610-b9c6-4243-8a33-8c6156e3f025"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.901135 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready" (OuterVolumeSpecName: "ready") pod "048e4610-b9c6-4243-8a33-8c6156e3f025" (UID: "048e4610-b9c6-4243-8a33-8c6156e3f025"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.910764 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn" (OuterVolumeSpecName: "kube-api-access-xmwqn") pod "048e4610-b9c6-4243-8a33-8c6156e3f025" (UID: "048e4610-b9c6-4243-8a33-8c6156e3f025"). InnerVolumeSpecName "kube-api-access-xmwqn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:56:45 crc kubenswrapper[5129]: I1211 16:56:45.931487 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:45.999994 5129 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/048e4610-b9c6-4243-8a33-8c6156e3f025-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.000031 5129 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/048e4610-b9c6-4243-8a33-8c6156e3f025-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.000045 5129 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/048e4610-b9c6-4243-8a33-8c6156e3f025-ready\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.000055 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmwqn\" (UniqueName: \"kubernetes.io/projected/048e4610-b9c6-4243-8a33-8c6156e3f025-kube-api-access-xmwqn\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.149910 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-98qvs_048e4610-b9c6-4243-8a33-8c6156e3f025/kube-multus-additional-cni-plugins/0.log"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.149977 5129 generic.go:358] "Generic (PLEG): container finished" podID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb" exitCode=137
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.150103 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" event={"ID":"048e4610-b9c6-4243-8a33-8c6156e3f025","Type":"ContainerDied","Data":"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"}
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.150145 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs" event={"ID":"048e4610-b9c6-4243-8a33-8c6156e3f025","Type":"ContainerDied","Data":"7cecca7ddcbabbfe816bf13b17c39710c65960e010ca807260085cbc098d9556"}
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.150166 5129 scope.go:117] "RemoveContainer" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.150879 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-98qvs"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.166194 5129 scope.go:117] "RemoveContainer" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"
Dec 11 16:56:46 crc kubenswrapper[5129]: E1211 16:56:46.166663 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb\": container with ID starting with fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb not found: ID does not exist" containerID="fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.166700 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb"} err="failed to get container status \"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb\": rpc error: code = NotFound desc = could not find container \"fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb\": container with ID starting with fd9c313c950441111642676661a28a9147856124218c9e23f9123dbaef1dfebb not found: ID does not exist"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.177567 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-98qvs"]
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.182467 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-98qvs"]
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.527314 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" path="/var/lib/kubelet/pods/048e4610-b9c6-4243-8a33-8c6156e3f025/volumes"
Dec 11 16:56:46 crc kubenswrapper[5129]: I1211 16:56:46.755681 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zkktb"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.204289 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.419743 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7khq8"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.420166 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-7khq8"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.468099 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7khq8"
Dec 11 16:56:47 crc kubenswrapper[5129]: E1211 16:56:47.525919 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache]"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.625953 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.626026 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:47 crc kubenswrapper[5129]: I1211 16:56:47.664985 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.202239 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7khq8"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.212322 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.288059 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq47r"]
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.395921 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.396176 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.451446 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.802900 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.803304 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:48 crc kubenswrapper[5129]: I1211 16:56:48.844943 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:49 crc kubenswrapper[5129]: I1211 16:56:49.175898 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gq47r" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="registry-server" containerID="cri-o://a2051acdf53c776979c7f8fc425ce8f8976e4836eb3e205c2836165fb00582a4" gracePeriod=2
Dec 11 16:56:49 crc kubenswrapper[5129]: I1211 16:56:49.209607 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:56:49 crc kubenswrapper[5129]: I1211 16:56:49.210970 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvjfk"
Dec 11 16:56:50 crc kubenswrapper[5129]: I1211 16:56:50.029624 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"]
Dec 11 16:56:50 crc kubenswrapper[5129]: I1211 16:56:50.688040 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"]
Dec 11 16:56:50 crc kubenswrapper[5129]: I1211 16:56:50.688547 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9tssw" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="registry-server" containerID="cri-o://c689a21229accd66335f557a46fb1eaa72b05bb7afd6fcf470fd283c9471b720" gracePeriod=2
Dec 11 16:56:53 crc kubenswrapper[5129]: I1211 16:56:53.088275 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"]
Dec 11 16:56:53 crc kubenswrapper[5129]: I1211 16:56:53.088584 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvjfk" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="registry-server" containerID="cri-o://4bfa659623b33503616e776393d889d3801241ed18d6e6c64f4bd8684f528b41" gracePeriod=2
Dec 11 16:56:53 crc kubenswrapper[5129]: I1211 16:56:53.195260 5129 generic.go:358] "Generic (PLEG): container finished" podID="c524108b-2e35-4faa-9711-c13139f1321f" containerID="a2051acdf53c776979c7f8fc425ce8f8976e4836eb3e205c2836165fb00582a4" exitCode=0
Dec 11 16:56:53 crc kubenswrapper[5129]: I1211 16:56:53.195305 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerDied","Data":"a2051acdf53c776979c7f8fc425ce8f8976e4836eb3e205c2836165fb00582a4"}
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.202793 5129 generic.go:358] "Generic (PLEG): container finished" podID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerID="c689a21229accd66335f557a46fb1eaa72b05bb7afd6fcf470fd283c9471b720" exitCode=0
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.202870 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerDied","Data":"c689a21229accd66335f557a46fb1eaa72b05bb7afd6fcf470fd283c9471b720"}
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.339495 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342874 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342899 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342932 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342939 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342967 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d152f2f-4642-428b-b6da-7cc4f687eb71" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.342974 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d152f2f-4642-428b-b6da-7cc4f687eb71" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.343100 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d152f2f-4642-428b-b6da-7cc4f687eb71" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.343113 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea9bc8b1-7af5-4b15-b807-6a42f5405fc2" containerName="pruner"
Dec 11 16:56:54 crc kubenswrapper[5129]: I1211 16:56:54.343130 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="048e4610-b9c6-4243-8a33-8c6156e3f025" containerName="kube-multus-additional-cni-plugins"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.133762 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.223363 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content\") pod \"c524108b-2e35-4faa-9711-c13139f1321f\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.223475 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities\") pod \"c524108b-2e35-4faa-9711-c13139f1321f\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.223583 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfm9s\" (UniqueName: \"kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s\") pod \"c524108b-2e35-4faa-9711-c13139f1321f\" (UID: \"c524108b-2e35-4faa-9711-c13139f1321f\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.224527 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities" (OuterVolumeSpecName: "utilities") pod "c524108b-2e35-4faa-9711-c13139f1321f" (UID: "c524108b-2e35-4faa-9711-c13139f1321f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.224779 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.243589 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s" (OuterVolumeSpecName: "kube-api-access-lfm9s") pod "c524108b-2e35-4faa-9711-c13139f1321f" (UID: "c524108b-2e35-4faa-9711-c13139f1321f"). InnerVolumeSpecName "kube-api-access-lfm9s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.279272 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c524108b-2e35-4faa-9711-c13139f1321f" (UID: "c524108b-2e35-4faa-9711-c13139f1321f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.326366 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfm9s\" (UniqueName: \"kubernetes.io/projected/c524108b-2e35-4faa-9711-c13139f1321f-kube-api-access-lfm9s\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.326401 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c524108b-2e35-4faa-9711-c13139f1321f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.610016 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.654504 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq47r"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.654856 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.658140 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.658415 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.709988 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.710025 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq47r" event={"ID":"c524108b-2e35-4faa-9711-c13139f1321f","Type":"ContainerDied","Data":"1a87bb511b417e3908be82c211160e4751c447f958d29f8689b73c4a1963fa05"}
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.710081 5129 scope.go:117] "RemoveContainer" containerID="a2051acdf53c776979c7f8fc425ce8f8976e4836eb3e205c2836165fb00582a4"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.729905 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq47r"]
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.731226 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities\") pod \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.731348 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7j8p\" (UniqueName: \"kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p\") pod \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.731461 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content\") pod \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\" (UID: \"c6af231a-d33c-4f6f-8278-7aa1f6bc3635\") "
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.731694 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.731753 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.732431 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gq47r"]
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.732889 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities" (OuterVolumeSpecName: "utilities") pod "c6af231a-d33c-4f6f-8278-7aa1f6bc3635" (UID: "c6af231a-d33c-4f6f-8278-7aa1f6bc3635"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.734121 5129 scope.go:117] "RemoveContainer" containerID="16e5a2111d7690a0f28812beaffbb0261dbc6ea3c06e857c3473889f94295c8c"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.738310 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p" (OuterVolumeSpecName: "kube-api-access-m7j8p") pod "c6af231a-d33c-4f6f-8278-7aa1f6bc3635" (UID: "c6af231a-d33c-4f6f-8278-7aa1f6bc3635"). InnerVolumeSpecName "kube-api-access-m7j8p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.747015 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6af231a-d33c-4f6f-8278-7aa1f6bc3635" (UID: "c6af231a-d33c-4f6f-8278-7aa1f6bc3635"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.747677 5129 scope.go:117] "RemoveContainer" containerID="9682125c0bb0a473b1ffd1c7e720d12c01a30cc1ef7db47355151b3a0b85e51f"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.832623 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.832994 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.833102 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.833181 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.833242 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m7j8p\" (UniqueName: \"kubernetes.io/projected/c6af231a-d33c-4f6f-8278-7aa1f6bc3635-kube-api-access-m7j8p\") on node \"crc\" DevicePath \"\""
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.832797 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:55 crc kubenswrapper[5129]: I1211 16:56:55.848445 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.024769 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.149784 5129 ???:1] "http: TLS handshake error from 192.168.126.11:32840: no serving certificate available for the kubelet"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.192357 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.211777 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.235240 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9tssw" event={"ID":"c6af231a-d33c-4f6f-8278-7aa1f6bc3635","Type":"ContainerDied","Data":"ffcc277da8f6a01992a22397af949bd3ea4fddd3febeadffd790bb4a090755be"}
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.235286 5129 scope.go:117] "RemoveContainer" containerID="c689a21229accd66335f557a46fb1eaa72b05bb7afd6fcf470fd283c9471b720"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.235436 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9tssw"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.248038 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n7wt5"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.267457 5129 generic.go:358] "Generic (PLEG): container finished" podID="836875ec-a9b9-41eb-9552-b8af7e552247" containerID="4bfa659623b33503616e776393d889d3801241ed18d6e6c64f4bd8684f528b41" exitCode=0
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.267711 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerDied","Data":"4bfa659623b33503616e776393d889d3801241ed18d6e6c64f4bd8684f528b41"}
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.275106 5129 scope.go:117] "RemoveContainer" containerID="ce07cac56e0bef9cde333933d6154a77c0022fb966a56e7bf7b91a8cdb7e12e1"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.288613 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"]
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.292826 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9tssw"]
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.311280 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.312028 5129 scope.go:117] "RemoveContainer" containerID="468901ef9253541e6510cb76e65a92006ef39bf97ba30900da568191786fd827"
Dec 11 16:56:56 crc kubenswrapper[5129]: I1211 16:56:56.526765 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c524108b-2e35-4faa-9711-c13139f1321f" path="/var/lib/kubelet/pods/c524108b-2e35-4faa-9711-c13139f1321f/volumes"
Dec 11 16:56:56 crc
kubenswrapper[5129]: I1211 16:56:56.527840 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" path="/var/lib/kubelet/pods/c6af231a-d33c-4f6f-8278-7aa1f6bc3635/volumes" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.239880 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjfk" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.287298 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvjfk" event={"ID":"836875ec-a9b9-41eb-9552-b8af7e552247","Type":"ContainerDied","Data":"65a5d953619923ec8a5b2e0ace035e84fb5bb9a48dbe2bfb41a5bf8465ac640b"} Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.287864 5129 scope.go:117] "RemoveContainer" containerID="4bfa659623b33503616e776393d889d3801241ed18d6e6c64f4bd8684f528b41" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.287316 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvjfk" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.288557 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"94f25da4-8aa7-45de-8cf9-6209d948b9d5","Type":"ContainerStarted","Data":"d36ff48695a6abae3430afbac45900f14c3da836ce8e3df85101ca8ccc2266e4"} Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.313889 5129 scope.go:117] "RemoveContainer" containerID="44aa459fccfa6a1223426d4ac080ed4a276ae14eb280b6e7f22c175167ccd6a6" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.338661 5129 scope.go:117] "RemoveContainer" containerID="b3050582618866f0b5a4f951c2f91b03b997b301cd420dd077c72c54b1c4c171" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.361848 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content\") pod \"836875ec-a9b9-41eb-9552-b8af7e552247\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.362100 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md8rp\" (UniqueName: \"kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp\") pod \"836875ec-a9b9-41eb-9552-b8af7e552247\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.362185 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities\") pod \"836875ec-a9b9-41eb-9552-b8af7e552247\" (UID: \"836875ec-a9b9-41eb-9552-b8af7e552247\") " Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.363436 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities" (OuterVolumeSpecName: "utilities") pod "836875ec-a9b9-41eb-9552-b8af7e552247" (UID: "836875ec-a9b9-41eb-9552-b8af7e552247"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.373590 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp" (OuterVolumeSpecName: "kube-api-access-md8rp") pod "836875ec-a9b9-41eb-9552-b8af7e552247" (UID: "836875ec-a9b9-41eb-9552-b8af7e552247"). InnerVolumeSpecName "kube-api-access-md8rp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.463625 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-md8rp\" (UniqueName: \"kubernetes.io/projected/836875ec-a9b9-41eb-9552-b8af7e552247-kube-api-access-md8rp\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:57 crc kubenswrapper[5129]: I1211 16:56:57.463656 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:57 crc kubenswrapper[5129]: E1211 16:56:57.661729 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache]" Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.289284 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"] Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.289620 5129 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-n7wt5" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="registry-server" containerID="cri-o://719c31c82975fb84358c91082dde92642a7fc6a049fd4d94f211f52fbce5a91a" gracePeriod=2 Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.309161 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "836875ec-a9b9-41eb-9552-b8af7e552247" (UID: "836875ec-a9b9-41eb-9552-b8af7e552247"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.314594 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"94f25da4-8aa7-45de-8cf9-6209d948b9d5","Type":"ContainerStarted","Data":"fb6d1e3e3cee077ee3577a92a3a869f6fb4353ede5833d94570f85c70342f649"} Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.388486 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836875ec-a9b9-41eb-9552-b8af7e552247-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.410648 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"] Dec 11 16:56:59 crc kubenswrapper[5129]: I1211 16:56:59.414809 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvjfk"] Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.322069 5129 generic.go:358] "Generic (PLEG): container finished" podID="94f25da4-8aa7-45de-8cf9-6209d948b9d5" containerID="fb6d1e3e3cee077ee3577a92a3a869f6fb4353ede5833d94570f85c70342f649" exitCode=0 Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.322252 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"94f25da4-8aa7-45de-8cf9-6209d948b9d5","Type":"ContainerDied","Data":"fb6d1e3e3cee077ee3577a92a3a869f6fb4353ede5833d94570f85c70342f649"} Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.328134 5129 generic.go:358] "Generic (PLEG): container finished" podID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerID="719c31c82975fb84358c91082dde92642a7fc6a049fd4d94f211f52fbce5a91a" exitCode=0 Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.328184 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerDied","Data":"719c31c82975fb84358c91082dde92642a7fc6a049fd4d94f211f52fbce5a91a"} Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.527434 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" path="/var/lib/kubelet/pods/836875ec-a9b9-41eb-9552-b8af7e552247/volumes" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.694357 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.802667 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmhz4\" (UniqueName: \"kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4\") pod \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.802742 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content\") pod \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.802781 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities\") pod \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\" (UID: \"d9800ee9-8362-47af-ae1f-b4b2c91d08a1\") " Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.803801 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities" (OuterVolumeSpecName: "utilities") pod "d9800ee9-8362-47af-ae1f-b4b2c91d08a1" (UID: "d9800ee9-8362-47af-ae1f-b4b2c91d08a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.811281 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4" (OuterVolumeSpecName: "kube-api-access-wmhz4") pod "d9800ee9-8362-47af-ae1f-b4b2c91d08a1" (UID: "d9800ee9-8362-47af-ae1f-b4b2c91d08a1"). InnerVolumeSpecName "kube-api-access-wmhz4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.833932 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9800ee9-8362-47af-ae1f-b4b2c91d08a1" (UID: "d9800ee9-8362-47af-ae1f-b4b2c91d08a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.903695 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wmhz4\" (UniqueName: \"kubernetes.io/projected/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-kube-api-access-wmhz4\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.903742 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:00 crc kubenswrapper[5129]: I1211 16:57:00.903756 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9800ee9-8362-47af-ae1f-b4b2c91d08a1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.337543 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7wt5" event={"ID":"d9800ee9-8362-47af-ae1f-b4b2c91d08a1","Type":"ContainerDied","Data":"29b9a519fa9aeac863d1b530179617669eab440bfc441aabf904deebe211771a"} Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.337603 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7wt5" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.337623 5129 scope.go:117] "RemoveContainer" containerID="719c31c82975fb84358c91082dde92642a7fc6a049fd4d94f211f52fbce5a91a" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.357802 5129 scope.go:117] "RemoveContainer" containerID="a7416d9ea3d0647ea6d2295662803e3d66053ade0b7b0e40a4d1a494f39b3134" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.375694 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"] Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.378737 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n7wt5"] Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.399695 5129 scope.go:117] "RemoveContainer" containerID="39f593f8fdc8a53347077d0a9692c3ffacc85badeaf9e7734ea92aeba7768069" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.538124 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.613836 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access\") pod \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.613918 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir\") pod \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\" (UID: \"94f25da4-8aa7-45de-8cf9-6209d948b9d5\") " Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.614123 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94f25da4-8aa7-45de-8cf9-6209d948b9d5" (UID: "94f25da4-8aa7-45de-8cf9-6209d948b9d5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.618193 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94f25da4-8aa7-45de-8cf9-6209d948b9d5" (UID: "94f25da4-8aa7-45de-8cf9-6209d948b9d5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.715039 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:01 crc kubenswrapper[5129]: I1211 16:57:01.715071 5129 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94f25da4-8aa7-45de-8cf9-6209d948b9d5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.344992 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.345016 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"94f25da4-8aa7-45de-8cf9-6209d948b9d5","Type":"ContainerDied","Data":"d36ff48695a6abae3430afbac45900f14c3da836ce8e3df85101ca8ccc2266e4"} Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.345063 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d36ff48695a6abae3430afbac45900f14c3da836ce8e3df85101ca8ccc2266e4" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.527588 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" path="/var/lib/kubelet/pods/d9800ee9-8362-47af-ae1f-b4b2c91d08a1/volumes" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.936353 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.937779 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="extract-utilities" Dec 11 16:57:02 crc 
kubenswrapper[5129]: I1211 16:57:02.937868 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.937934 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.937987 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938050 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938113 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938167 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938218 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938281 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938331 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938389 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938444 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938497 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938570 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938625 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938677 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938729 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938785 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938839 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94f25da4-8aa7-45de-8cf9-6209d948b9d5" containerName="pruner" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938895 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f25da4-8aa7-45de-8cf9-6209d948b9d5" containerName="pruner" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.938957 5129 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939010 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="extract-utilities" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939067 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939118 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939171 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939222 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="extract-content" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939353 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9800ee9-8362-47af-ae1f-b4b2c91d08a1" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939418 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="94f25da4-8aa7-45de-8cf9-6209d948b9d5" containerName="pruner" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939479 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6af231a-d33c-4f6f-8278-7aa1f6bc3635" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 16:57:02.939567 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="c524108b-2e35-4faa-9711-c13139f1321f" containerName="registry-server" Dec 11 16:57:02 crc kubenswrapper[5129]: I1211 
16:57:02.939645 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="836875ec-a9b9-41eb-9552-b8af7e552247" containerName="registry-server" Dec 11 16:57:03 crc kubenswrapper[5129]: I1211 16:57:03.945796 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 11 16:57:03 crc kubenswrapper[5129]: I1211 16:57:03.946033 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:03 crc kubenswrapper[5129]: I1211 16:57:03.949066 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:57:03 crc kubenswrapper[5129]: I1211 16:57:03.949502 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.060227 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.060271 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.060371 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access\") pod \"installer-12-crc\" (UID: 
\"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.162112 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.162204 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.162328 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.162613 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.162681 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 
16:57:04.180316 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access\") pod \"installer-12-crc\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.263618 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:57:04 crc kubenswrapper[5129]: I1211 16:57:04.686087 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 11 16:57:05 crc kubenswrapper[5129]: I1211 16:57:05.370274 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"354124a4-f72e-48af-b7ae-77e8990c6c47","Type":"ContainerStarted","Data":"ad2c70d189a890c3d39500bb9f8eed6fe83eb615e7fabff45befd2a7bd73b08a"}
Dec 11 16:57:05 crc kubenswrapper[5129]: I1211 16:57:05.370733 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"354124a4-f72e-48af-b7ae-77e8990c6c47","Type":"ContainerStarted","Data":"4523693eea9a99cad820da4b38f46e7e20db14421635c83e9b9fb952fe00a65c"}
Dec 11 16:57:05 crc kubenswrapper[5129]: I1211 16:57:05.389576 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.3895505 podStartE2EDuration="3.3895505s" podCreationTimestamp="2025-12-11 16:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:57:05.385101832 +0000 UTC m=+169.188631859" watchObservedRunningTime="2025-12-11 16:57:05.3895505 +0000 UTC m=+169.193080547"
Dec 11 16:57:07 crc kubenswrapper[5129]: E1211 16:57:07.781646 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b278cf_b87e_4f64_9619_748b8a89619d.slice/crio-3cc45f4cc97bc071b71bde25e50cb9366bad2e6ba93f4afac5f0dbf0e7759da8\": RecentStats: unable to find data in memory cache]"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.066017 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerName="oauth-openshift" containerID="cri-o://7192e168ee4bf907a85a42399f2fd8be30b89bc4eb15cf74b2861f656c2896db" gracePeriod=15
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.446685 5129 generic.go:358] "Generic (PLEG): container finished" podID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerID="7192e168ee4bf907a85a42399f2fd8be30b89bc4eb15cf74b2861f656c2896db" exitCode=0
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.446830 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" event={"ID":"40ca3ab4-d0e2-45dd-896c-d688cfc10b10","Type":"ContainerDied","Data":"7192e168ee4bf907a85a42399f2fd8be30b89bc4eb15cf74b2861f656c2896db"}
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.546548 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.586072 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"]
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.586722 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerName="oauth-openshift"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.586740 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerName="oauth-openshift"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.586831 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" containerName="oauth-openshift"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.612643 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"]
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.612722 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.622932 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623027 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623077 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623155 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623194 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc
kubenswrapper[5129]: I1211 16:57:15.623228 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623291 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfrn\" (UniqueName: \"kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623342 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623386 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623455 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623497 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623825 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.623957 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.624167 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.624215 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca\") pod \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\" (UID: \"40ca3ab4-d0e2-45dd-896c-d688cfc10b10\") "
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.624221 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.626462 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.627328 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.633909 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn" (OuterVolumeSpecName: "kube-api-access-2gfrn") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "kube-api-access-2gfrn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.633937 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.634207 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.634412 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.634742 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-router-certs".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.635060 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.635294 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.636285 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.637101 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.626423 5129 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.638467 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.639640 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "40ca3ab4-d0e2-45dd-896c-d688cfc10b10" (UID: "40ca3ab4-d0e2-45dd-896c-d688cfc10b10"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740080 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740223 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740354 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740435 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-session\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740504 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740544 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740664 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-audit-policies\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740714 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-error\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740808 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName:
\"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740834 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740860 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740892 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-login\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740914 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36168849-5153-4dd7-b68d-049d54baa1f8-audit-dir\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740946 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4dt\" (UniqueName: \"kubernetes.io/projected/36168849-5153-4dd7-b68d-049d54baa1f8-kube-api-access-dd4dt\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.740988 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741006 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741016 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741026 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gfrn\" (UniqueName: \"kubernetes.io/projected/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-kube-api-access-2gfrn\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741035 5129 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741044 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741053 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741062 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741071 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741080 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741090 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.741099 5129 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40ca3ab4-d0e2-45dd-896c-d688cfc10b10-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.842782 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-session\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.843265 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.843300 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844456 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-audit-policies\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") "
pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844499 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-error\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844634 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844664 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844691 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844721 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-login\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844750 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36168849-5153-4dd7-b68d-049d54baa1f8-audit-dir\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844781 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4dt\" (UniqueName: \"kubernetes.io/projected/36168849-5153-4dd7-b68d-049d54baa1f8-kube-api-access-dd4dt\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844825 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844829 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.844993 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.845045 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.845922 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-audit-policies\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.846124 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36168849-5153-4dd7-b68d-049d54baa1f8-audit-dir\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.846156 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.846413 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.848631 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.848680 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.848787 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-error\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.848852 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-session\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.848864 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.850624 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.850628 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-user-template-login\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"
Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.852576 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName:
\"kubernetes.io/secret/36168849-5153-4dd7-b68d-049d54baa1f8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.870605 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4dt\" (UniqueName: \"kubernetes.io/projected/36168849-5153-4dd7-b68d-049d54baa1f8-kube-api-access-dd4dt\") pod \"oauth-openshift-6b75ff674b-2fb6g\" (UID: \"36168849-5153-4dd7-b68d-049d54baa1f8\") " pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" Dec 11 16:57:15 crc kubenswrapper[5129]: I1211 16:57:15.973939 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.183542 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b75ff674b-2fb6g"] Dec 11 16:57:16 crc kubenswrapper[5129]: W1211 16:57:16.206382 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36168849_5153_4dd7_b68d_049d54baa1f8.slice/crio-3f3559b24d7abf2831a4902f51b017e7a1cb00e059ce121900f3d834d9ecbb48 WatchSource:0}: Error finding container 3f3559b24d7abf2831a4902f51b017e7a1cb00e059ce121900f3d834d9ecbb48: Status 404 returned error can't find the container with id 3f3559b24d7abf2831a4902f51b017e7a1cb00e059ce121900f3d834d9ecbb48 Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.455537 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" event={"ID":"36168849-5153-4dd7-b68d-049d54baa1f8","Type":"ContainerStarted","Data":"3f3559b24d7abf2831a4902f51b017e7a1cb00e059ce121900f3d834d9ecbb48"} Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.458872 
5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" event={"ID":"40ca3ab4-d0e2-45dd-896c-d688cfc10b10","Type":"ContainerDied","Data":"30224e3ba4cf648f9135dc8048922a5d4d6bc4f601ad6f5c37ca59b58314197b"} Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.458907 5129 scope.go:117] "RemoveContainer" containerID="7192e168ee4bf907a85a42399f2fd8be30b89bc4eb15cf74b2861f656c2896db" Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.459006 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-glzzm" Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.490504 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"] Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.491749 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-glzzm"] Dec 11 16:57:16 crc kubenswrapper[5129]: I1211 16:57:16.530273 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ca3ab4-d0e2-45dd-896c-d688cfc10b10" path="/var/lib/kubelet/pods/40ca3ab4-d0e2-45dd-896c-d688cfc10b10/volumes" Dec 11 16:57:17 crc kubenswrapper[5129]: I1211 16:57:17.468231 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" event={"ID":"36168849-5153-4dd7-b68d-049d54baa1f8","Type":"ContainerStarted","Data":"50575c26da1fb972192a57568cfb4e2df53433c314ef447766187b1a53c74af4"} Dec 11 16:57:17 crc kubenswrapper[5129]: I1211 16:57:17.469650 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" Dec 11 16:57:17 crc kubenswrapper[5129]: I1211 16:57:17.476431 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" Dec 11 16:57:17 crc kubenswrapper[5129]: I1211 16:57:17.498716 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b75ff674b-2fb6g" podStartSLOduration=27.498691074 podStartE2EDuration="27.498691074s" podCreationTimestamp="2025-12-11 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:57:17.494961787 +0000 UTC m=+181.298491844" watchObservedRunningTime="2025-12-11 16:57:17.498691074 +0000 UTC m=+181.302221101" Dec 11 16:57:37 crc kubenswrapper[5129]: I1211 16:57:37.137653 5129 ???:1] "http: TLS handshake error from 192.168.126.11:34868: no serving certificate available for the kubelet" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.982552 5129 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.983160 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016" gracePeriod=15 Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.983277 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe" gracePeriod=15 Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.983331 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4" gracePeriod=15 Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.983251 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9" gracePeriod=15 Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.983399 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03" gracePeriod=15 Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.986733 5129 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987451 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987475 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987489 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987498 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc 
kubenswrapper[5129]: I1211 16:57:42.987541 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987550 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987565 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987573 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987587 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987594 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987604 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987612 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987620 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987627 5129 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987637 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987644 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987655 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987662 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987688 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987699 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987861 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987877 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987887 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" 
Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987900 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987910 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987921 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987931 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.987941 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:57:42 crc kubenswrapper[5129]: I1211 16:57:42.988237 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.002448 5129 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.010634 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.019311 5129 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.042827 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: E1211 16:57:43.043465 5129 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142471 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142568 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142700 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142756 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142778 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142845 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142871 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142921 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.142946 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.143037 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244076 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244118 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244135 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244249 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244315 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244275 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244368 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244331 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244404 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244452 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244484 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244546 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244606 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc 
kubenswrapper[5129]: I1211 16:57:43.244656 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244785 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244839 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244862 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244898 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244904 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.244998 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.344287 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:43 crc kubenswrapper[5129]: E1211 16:57:43.364402 5129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188037ab6e9ac3b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:57:43.363806135 +0000 UTC m=+207.167336162,LastTimestamp:2025-12-11 16:57:43.363806135 +0000 UTC m=+207.167336162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.652430 5129 generic.go:358] "Generic (PLEG): container finished" podID="354124a4-f72e-48af-b7ae-77e8990c6c47" containerID="ad2c70d189a890c3d39500bb9f8eed6fe83eb615e7fabff45befd2a7bd73b08a" exitCode=0 Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.652581 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"354124a4-f72e-48af-b7ae-77e8990c6c47","Type":"ContainerDied","Data":"ad2c70d189a890c3d39500bb9f8eed6fe83eb615e7fabff45befd2a7bd73b08a"} Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.653272 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.655026 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"ec8ebad7e8a587b0c09f5516d862d26b06dcfc9111d947a9814fb0afb6c55e6a"} Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.657549 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.658926 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.660294 5129 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9" exitCode=0 Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.660325 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4" exitCode=0 Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.660334 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03" exitCode=0 Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.660342 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe" exitCode=2 Dec 11 16:57:43 crc kubenswrapper[5129]: I1211 16:57:43.660458 5129 scope.go:117] "RemoveContainer" containerID="d42aa40d17992138d3fdd6423aeb398a77fefba55fa96a5277831e61825eef45" Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.676132 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"87a7806e899c771a4ebecc6158fc742e2ab0165843b3f54119521cc10ff9cac6"} Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.676405 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.677009 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:44 crc 
kubenswrapper[5129]: E1211 16:57:44.677094 5129 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.679869 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.918594 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:44 crc kubenswrapper[5129]: I1211 16:57:44.919469 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.074757 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access\") pod \"354124a4-f72e-48af-b7ae-77e8990c6c47\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.074831 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir\") pod \"354124a4-f72e-48af-b7ae-77e8990c6c47\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.074928 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock\") pod \"354124a4-f72e-48af-b7ae-77e8990c6c47\" (UID: \"354124a4-f72e-48af-b7ae-77e8990c6c47\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.075411 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock" (OuterVolumeSpecName: "var-lock") pod "354124a4-f72e-48af-b7ae-77e8990c6c47" (UID: "354124a4-f72e-48af-b7ae-77e8990c6c47"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.076337 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "354124a4-f72e-48af-b7ae-77e8990c6c47" (UID: "354124a4-f72e-48af-b7ae-77e8990c6c47"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.103722 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "354124a4-f72e-48af-b7ae-77e8990c6c47" (UID: "354124a4-f72e-48af-b7ae-77e8990c6c47"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.176883 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/354124a4-f72e-48af-b7ae-77e8990c6c47-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.176917 5129 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.176925 5129 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/354124a4-f72e-48af-b7ae-77e8990c6c47-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.350692 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.351703 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.352182 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.352432 5129 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.480691 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.480743 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.480764 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.480822 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.480885 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.481146 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.481160 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.481187 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.481905 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.483765 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.582604 5129 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.582654 5129 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.582665 5129 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.582676 5129 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.582686 5129 
reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.690096 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.690137 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"354124a4-f72e-48af-b7ae-77e8990c6c47","Type":"ContainerDied","Data":"4523693eea9a99cad820da4b38f46e7e20db14421635c83e9b9fb952fe00a65c"} Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.690181 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4523693eea9a99cad820da4b38f46e7e20db14421635c83e9b9fb952fe00a65c" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.693148 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.693867 5129 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016" exitCode=0 Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.693983 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.694007 5129 scope.go:117] "RemoveContainer" containerID="43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.694227 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.694736 5129 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.709062 5129 scope.go:117] "RemoveContainer" containerID="dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.714138 5129 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.715707 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.716198 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.716595 5129 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.721074 5129 scope.go:117] "RemoveContainer" containerID="0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.732869 5129 scope.go:117] "RemoveContainer" containerID="1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.747269 5129 scope.go:117] "RemoveContainer" containerID="4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.761420 5129 scope.go:117] "RemoveContainer" containerID="8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.831134 5129 scope.go:117] "RemoveContainer" containerID="43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.831498 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9\": container with ID starting with 43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9 not found: ID does not exist" containerID="43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.831554 5129 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9"} err="failed to get container status \"43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9\": rpc error: code = NotFound desc = could not find container \"43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9\": container with ID starting with 43fde053a48dfaa252e9d98353d29ee41ac85e7f649ea960f19f6748b1c255c9 not found: ID does not exist" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.831583 5129 scope.go:117] "RemoveContainer" containerID="dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.831850 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\": container with ID starting with dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4 not found: ID does not exist" containerID="dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.831892 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4"} err="failed to get container status \"dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\": rpc error: code = NotFound desc = could not find container \"dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4\": container with ID starting with dd2e45984888a500d895dc51b73e89e0b1bf5707a3f6a0f7d9b4e6bd4b30b7d4 not found: ID does not exist" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.831939 5129 scope.go:117] "RemoveContainer" containerID="0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 
16:57:45.832125 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\": container with ID starting with 0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03 not found: ID does not exist" containerID="0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832154 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03"} err="failed to get container status \"0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\": rpc error: code = NotFound desc = could not find container \"0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03\": container with ID starting with 0bf3f99c26b0f17df3a38c5a891da601aebbf4f4b22f618d3f7622051b193c03 not found: ID does not exist" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832167 5129 scope.go:117] "RemoveContainer" containerID="1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.832320 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\": container with ID starting with 1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe not found: ID does not exist" containerID="1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832340 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe"} err="failed to get container status \"1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\": rpc 
error: code = NotFound desc = could not find container \"1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe\": container with ID starting with 1994a6b02fac155f44d0c95d2b925df1c6a13d5a07e6818c06b991c0e7c909fe not found: ID does not exist" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832354 5129 scope.go:117] "RemoveContainer" containerID="4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.832510 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\": container with ID starting with 4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016 not found: ID does not exist" containerID="4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832539 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016"} err="failed to get container status \"4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\": rpc error: code = NotFound desc = could not find container \"4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016\": container with ID starting with 4e99b461b227cd21107a9c3cfdceb4286fde417a613e05d6f338c5fd620e1016 not found: ID does not exist" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832556 5129 scope.go:117] "RemoveContainer" containerID="8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54" Dec 11 16:57:45 crc kubenswrapper[5129]: E1211 16:57:45.832749 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\": container with ID starting with 
8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54 not found: ID does not exist" containerID="8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54" Dec 11 16:57:45 crc kubenswrapper[5129]: I1211 16:57:45.832770 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54"} err="failed to get container status \"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\": rpc error: code = NotFound desc = could not find container \"8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54\": container with ID starting with 8bccc3bf74d61380f6ccc05e1756238f6cc37aef80384e68b5a167beff229a54 not found: ID does not exist" Dec 11 16:57:46 crc kubenswrapper[5129]: I1211 16:57:46.523631 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:46 crc kubenswrapper[5129]: I1211 16:57:46.524096 5129 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:46 crc kubenswrapper[5129]: I1211 16:57:46.529252 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.581114 5129 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.581907 5129 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.582408 5129 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.582949 5129 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.583310 5129 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:50 crc kubenswrapper[5129]: I1211 16:57:50.583356 5129 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.583738 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="200ms" Dec 11 16:57:50 crc kubenswrapper[5129]: E1211 16:57:50.785114 5129 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="400ms" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.185894 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="800ms" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.190457 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:57:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:57:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:57:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:57:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.191328 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.191869 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.192348 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.192876 5129 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.192924 5129 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 11 16:57:51 crc kubenswrapper[5129]: E1211 16:57:51.987390 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="1.6s" Dec 11 16:57:52 crc kubenswrapper[5129]: E1211 16:57:52.733561 5129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.18:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188037ab6e9ac3b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:57:43.363806135 +0000 UTC m=+207.167336162,LastTimestamp:2025-12-11 16:57:43.363806135 +0000 UTC m=+207.167336162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:57:53 crc kubenswrapper[5129]: E1211 16:57:53.561055 5129 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" volumeName="registry-storage" Dec 11 16:57:53 crc kubenswrapper[5129]: E1211 16:57:53.588182 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="3.2s" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.527477 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.527508 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.529830 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.555988 5129 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.556044 5129 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:56 crc kubenswrapper[5129]: E1211 16:57:56.556791 5129 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.557104 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:56 crc kubenswrapper[5129]: I1211 16:57:56.764930 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"16754a1fecbce59a775438197a52afd7f576f12cb41f55f90c0d1f75de30f612"} Dec 11 16:57:56 crc kubenswrapper[5129]: E1211 16:57:56.789974 5129 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.18:6443: connect: connection refused" interval="6.4s" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.774775 5129 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="e6b6aa59e63aa4c78d5dbb4b095f85a574f2b2e5db88c616e06a3a6546d5a0f3" exitCode=0 Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.775144 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"e6b6aa59e63aa4c78d5dbb4b095f85a574f2b2e5db88c616e06a3a6546d5a0f3"} Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.775418 5129 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.775446 5129 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:57 crc kubenswrapper[5129]: E1211 16:57:57.775936 5129 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.776133 5129 status_manager.go:895] "Failed to get status for pod" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.779466 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.779569 5129 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be" exitCode=1 Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.779582 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be"} Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.780416 5129 scope.go:117] "RemoveContainer" containerID="c3f246dcdfeee1dbe91ab778d9f9a35df60e5f727f4bf495ec33ca1d3fd0b5be" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.781201 5129 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:57 crc kubenswrapper[5129]: I1211 16:57:57.781746 5129 status_manager.go:895] "Failed to get status for pod" 
podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.18:6443: connect: connection refused" Dec 11 16:57:58 crc kubenswrapper[5129]: I1211 16:57:58.790732 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0ddb51529ab71316623cb25ad7e8ef0d851eaea4931729b83a0a3d6f83db6562"} Dec 11 16:57:58 crc kubenswrapper[5129]: I1211 16:57:58.791078 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b3f5643d4bbbedf0d1b98d8c9da941ff97f24b47b51ce6ed19258165e544483b"} Dec 11 16:57:58 crc kubenswrapper[5129]: I1211 16:57:58.791088 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3ce6bc8eb102f46cd23666660a8a285b2978dded9b10048e5809008139e9c04c"} Dec 11 16:57:58 crc kubenswrapper[5129]: I1211 16:57:58.802994 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:57:58 crc kubenswrapper[5129]: I1211 16:57:58.803164 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2234b93d54e2b4ff58ea5796095c018f3ccc965814459b494788dc102c8e313c"} Dec 11 16:57:59 crc kubenswrapper[5129]: I1211 16:57:59.813405 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f32f3ca86a0a109bcd74e9290a0ac8d369ef5034421c5a1681672c9bb60c2b40"} Dec 11 16:57:59 crc kubenswrapper[5129]: I1211 16:57:59.813448 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fe2e5a586ae289cfb4385c309e790fab51bd7027cef022863aaa5ae48ef32b0d"} Dec 11 16:57:59 crc kubenswrapper[5129]: I1211 16:57:59.813793 5129 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:59 crc kubenswrapper[5129]: I1211 16:57:59.813808 5129 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:57:59 crc kubenswrapper[5129]: I1211 16:57:59.814019 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:01 crc kubenswrapper[5129]: I1211 16:58:01.557847 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:01 crc kubenswrapper[5129]: I1211 16:58:01.558420 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:01 crc kubenswrapper[5129]: I1211 16:58:01.570647 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:02 crc kubenswrapper[5129]: I1211 16:58:02.419777 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:58:02 crc kubenswrapper[5129]: I1211 16:58:02.427366 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:58:02 crc kubenswrapper[5129]: I1211 16:58:02.831168 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:58:04 crc kubenswrapper[5129]: I1211 16:58:04.840834 5129 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:04 crc kubenswrapper[5129]: I1211 16:58:04.841288 5129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:05 crc kubenswrapper[5129]: I1211 16:58:05.848388 5129 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:58:05 crc kubenswrapper[5129]: I1211 16:58:05.848417 5129 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:58:05 crc kubenswrapper[5129]: I1211 16:58:05.855591 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:06 crc kubenswrapper[5129]: I1211 16:58:06.544296 5129 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="3f38c96d-69d6-4c31-a9a3-519982a0fd4b" Dec 11 16:58:06 crc kubenswrapper[5129]: I1211 16:58:06.854556 5129 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:58:06 crc kubenswrapper[5129]: I1211 16:58:06.854596 5129 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6ed5df1b-2ebd-40ba-b00b-ea7e2a48aa6d" Dec 11 16:58:06 crc kubenswrapper[5129]: I1211 
16:58:06.858259 5129 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="3f38c96d-69d6-4c31-a9a3-519982a0fd4b" Dec 11 16:58:08 crc kubenswrapper[5129]: I1211 16:58:08.946506 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:58:08 crc kubenswrapper[5129]: I1211 16:58:08.947289 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:58:13 crc kubenswrapper[5129]: I1211 16:58:13.846990 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:58:14 crc kubenswrapper[5129]: I1211 16:58:14.661679 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 11 16:58:14 crc kubenswrapper[5129]: I1211 16:58:14.766162 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:14 crc kubenswrapper[5129]: I1211 16:58:14.847457 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.326741 5129 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.560745 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.739053 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.744891 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.940832 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 11 16:58:15 crc kubenswrapper[5129]: I1211 16:58:15.995367 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 11 16:58:16 crc kubenswrapper[5129]: I1211 16:58:16.422822 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.429616 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.484672 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.563289 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 11 
16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.631310 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.693486 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.709393 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.711025 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.754807 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.795205 5129 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.852990 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 11 16:58:17 crc kubenswrapper[5129]: I1211 16:58:17.926634 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.037499 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.058343 5129 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.159159 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.377206 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.434370 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.452024 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.485850 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.544544 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.577378 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.594462 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.613762 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 
11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.688915 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.804685 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.987674 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 11 16:58:18 crc kubenswrapper[5129]: I1211 16:58:18.995284 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.077797 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.237143 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.264164 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.397336 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.416893 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.446246 5129 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.499181 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.512136 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.539907 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.551283 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.616846 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.666616 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.686342 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.734496 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.831662 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 11 16:58:19 crc kubenswrapper[5129]: I1211 16:58:19.929772 5129 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.007482 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.108304 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.113482 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.154457 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.156028 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.319834 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.429215 5129 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.452777 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.572772 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.614485 5129 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.656159 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.671588 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.686857 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.770002 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.783928 5129 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.810762 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.897496 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.950103 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 11 16:58:20 crc kubenswrapper[5129]: I1211 16:58:20.963780 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.004963 5129 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.036814 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.091118 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.117454 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.157955 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.157974 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.164603 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.168413 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.226825 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.281261 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 11 16:58:21 crc 
kubenswrapper[5129]: I1211 16:58:21.522555 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.590306 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.591921 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.613396 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.663151 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.760089 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.774018 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.795208 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.932241 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 11 16:58:21 crc kubenswrapper[5129]: I1211 16:58:21.947126 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.035598 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.063014 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.097918 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.103135 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.114819 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.188464 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.195336 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.281237 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.300311 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.309100 5129 
reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.323689 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.329656 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.441944 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.514191 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.682471 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.684824 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.752793 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.761021 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.776342 5129 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.776373 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.790542 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.805691 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.825051 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.841054 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.850001 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.868486 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.870344 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.897251 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 11 16:58:22 crc kubenswrapper[5129]: I1211 16:58:22.965596 5129 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.005834 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.077062 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.114358 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.213208 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.216809 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.259654 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.294261 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.326384 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.420499 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.424762 5129 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.546335 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.583488 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.589588 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.601115 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.676601 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 11 16:58:23 crc kubenswrapper[5129]: I1211 16:58:23.683041 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.020052 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.085965 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.102075 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 
16:58:24.104117 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.194009 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.213559 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.323818 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.380317 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.508375 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.557168 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.589490 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.611141 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.785800 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 11 16:58:24 crc 
kubenswrapper[5129]: I1211 16:58:24.791825 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.797413 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.818122 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.842549 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.886363 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.888041 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.902323 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:24 crc kubenswrapper[5129]: I1211 16:58:24.913706 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.085188 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.143804 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.185258 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.251894 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.318544 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.329983 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.331350 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.371773 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.384266 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.419131 5129 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.424212 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.424288 5129 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.428379 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.430694 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.443326 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.443303946 podStartE2EDuration="21.443303946s" podCreationTimestamp="2025-12-11 16:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:58:25.439389207 +0000 UTC m=+249.242919254" watchObservedRunningTime="2025-12-11 16:58:25.443303946 +0000 UTC m=+249.246833973" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.472918 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.501221 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.580947 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.680282 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.685826 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.845044 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.879686 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.907859 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.936063 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.951785 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.988567 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 11 16:58:25 crc kubenswrapper[5129]: I1211 16:58:25.994130 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.096094 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.122909 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.140880 5129 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.158657 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.202577 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.231983 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.493591 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.732330 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.733340 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.760019 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.916578 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 11 16:58:26 crc kubenswrapper[5129]: I1211 16:58:26.928165 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.131679 5129 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.207889 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.235901 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.256989 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.328856 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.353309 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.465732 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.512486 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.517154 5129 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.517395 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" 
containerID="cri-o://87a7806e899c771a4ebecc6158fc742e2ab0165843b3f54119521cc10ff9cac6" gracePeriod=5 Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.535547 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.616196 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.665589 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.670053 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.675631 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.724451 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.787076 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.839948 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.885443 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.890652 5129 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 11 16:58:27 crc kubenswrapper[5129]: I1211 16:58:27.950942 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.026877 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.053318 5129 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.101545 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.137356 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.229413 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.302317 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.317980 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.338106 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.354264 5129 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.370580 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.381485 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.382838 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.426278 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.437727 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.601043 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.621781 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.661994 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:28 crc kubenswrapper[5129]: I1211 16:58:28.936991 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 
11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.059182 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.087087 5129 ???:1] "http: TLS handshake error from 192.168.126.11:57168: no serving certificate available for the kubelet" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.095709 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.183621 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.211354 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.224797 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.248269 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.354448 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.439007 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.675545 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 
16:58:29.679976 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.890069 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:58:29 crc kubenswrapper[5129]: I1211 16:58:29.914659 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.102528 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.110603 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.393011 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.550739 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.651537 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.735299 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.758883 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:58:30 crc kubenswrapper[5129]: I1211 16:58:30.814922 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 11 16:58:31 crc kubenswrapper[5129]: I1211 16:58:31.073816 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 11 16:58:31 crc kubenswrapper[5129]: I1211 16:58:31.246414 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:58:31 crc kubenswrapper[5129]: I1211 16:58:31.440979 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 11 16:58:32 crc kubenswrapper[5129]: I1211 16:58:32.500176 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.028390 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.028426 5129 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="87a7806e899c771a4ebecc6158fc742e2ab0165843b3f54119521cc10ff9cac6" exitCode=137
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.110999 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.111132 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.112921 5129 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217557 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217658 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217687 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217774 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217829 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217856 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217883 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217976 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.217941 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.218286 5129 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.218300 5129 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.218311 5129 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.218324 5129 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.229971 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:58:33 crc kubenswrapper[5129]: I1211 16:58:33.319673 5129 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:34 crc kubenswrapper[5129]: I1211 16:58:34.035302 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 11 16:58:34 crc kubenswrapper[5129]: I1211 16:58:34.035489 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 11 16:58:34 crc kubenswrapper[5129]: I1211 16:58:34.035623 5129 scope.go:117] "RemoveContainer" containerID="87a7806e899c771a4ebecc6158fc742e2ab0165843b3f54119521cc10ff9cac6"
Dec 11 16:58:34 crc kubenswrapper[5129]: I1211 16:58:34.055698 5129 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 11 16:58:34 crc kubenswrapper[5129]: I1211 16:58:34.528555 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 11 16:58:35 crc kubenswrapper[5129]: I1211 16:58:35.992538 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"]
Dec 11 16:58:35 crc kubenswrapper[5129]: I1211 16:58:35.993428 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mjbrt" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="registry-server" containerID="cri-o://7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" gracePeriod=30
Dec 11 16:58:35 crc kubenswrapper[5129]: I1211 16:58:35.998951 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc2p6"]
Dec 11 16:58:35 crc kubenswrapper[5129]: I1211 16:58:35.999232 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nc2p6" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="registry-server" containerID="cri-o://f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39" gracePeriod=30
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.016882 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"]
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.017238 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" containerName="marketplace-operator" containerID="cri-o://976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947" gracePeriod=30
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.022756 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"]
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.023077 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7khq8" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="registry-server" containerID="cri-o://0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa" gracePeriod=30
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.035713 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"]
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.036144 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcf8z" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="registry-server" containerID="cri-o://2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2" gracePeriod=30
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.043637 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"]
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044358 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044378 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044393 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" containerName="installer"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044400 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" containerName="installer"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044537 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="354124a4-f72e-48af-b7ae-77e8990c6c47" containerName="installer"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.044555 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.050438 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"]
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.050606 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.071419 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.071483 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.071633 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5263d35-d90b-4666-9b5a-ea8148099ead-tmp\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.071707 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqfw\" (UniqueName: \"kubernetes.io/projected/e5263d35-d90b-4666-9b5a-ea8148099ead-kube-api-access-4hqfw\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.153103 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 is running failed: container process not found" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.153196 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 is running failed: container process not found" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.153735 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 is running failed: container process not found" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.153969 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 is running failed: container process not found" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.154671 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 is running failed: container process not found" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.154779 5129 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-nc2p6" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="registry-server" probeResult="unknown"
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.154702 5129 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 is running failed: container process not found" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 16:58:36 crc kubenswrapper[5129]: E1211 16:58:36.155136 5129 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mjbrt" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="registry-server" probeResult="unknown"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.172362 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.172419 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.172449 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5263d35-d90b-4666-9b5a-ea8148099ead-tmp\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.172479 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hqfw\" (UniqueName: \"kubernetes.io/projected/e5263d35-d90b-4666-9b5a-ea8148099ead-kube-api-access-4hqfw\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.173422 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5263d35-d90b-4666-9b5a-ea8148099ead-tmp\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.174120 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.183579 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e5263d35-d90b-4666-9b5a-ea8148099ead-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.199741 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hqfw\" (UniqueName: \"kubernetes.io/projected/e5263d35-d90b-4666-9b5a-ea8148099ead-kube-api-access-4hqfw\") pod \"marketplace-operator-547dbd544d-jbbw9\" (UID: \"e5263d35-d90b-4666-9b5a-ea8148099ead\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.415130 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.418194 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc2p6"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.426141 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mjbrt"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.450103 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcf8z"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.451869 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7khq8"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.457503 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479314 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp\") pod \"06554c04-9d86-4813-b92c-669a3ae5a776\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479400 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content\") pod \"2cc34f9f-085b-445c-b10d-e6241e66f722\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479481 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca\") pod \"06554c04-9d86-4813-b92c-669a3ae5a776\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479541 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics\") pod \"06554c04-9d86-4813-b92c-669a3ae5a776\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479568 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x8s8\" (UniqueName: \"kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8\") pod \"06554c04-9d86-4813-b92c-669a3ae5a776\" (UID: \"06554c04-9d86-4813-b92c-669a3ae5a776\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479596 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities\") pod \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479617 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content\") pod \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479926 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b672j\" (UniqueName: \"kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j\") pod \"2cc34f9f-085b-445c-b10d-e6241e66f722\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479955 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp" (OuterVolumeSpecName: "tmp") pod "06554c04-9d86-4813-b92c-669a3ae5a776" (UID: "06554c04-9d86-4813-b92c-669a3ae5a776"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.479971 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content\") pod \"b51f4fcc-9be5-4925-b35e-75dca772e189\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480057 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czdst\" (UniqueName: \"kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst\") pod \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480143 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities\") pod \"2cc34f9f-085b-445c-b10d-e6241e66f722\" (UID: \"2cc34f9f-085b-445c-b10d-e6241e66f722\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480188 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content\") pod \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480218 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities\") pod \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\" (UID: \"7e5898b2-33b2-465b-bf38-07d11c8f67f1\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480281 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79rw\" (UniqueName: \"kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw\") pod \"b51f4fcc-9be5-4925-b35e-75dca772e189\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480372 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcdh\" (UniqueName: \"kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh\") pod \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\" (UID: \"55afdb67-75d7-4db9-bee0-95e43c4a07bd\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480436 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities\") pod \"b51f4fcc-9be5-4925-b35e-75dca772e189\" (UID: \"b51f4fcc-9be5-4925-b35e-75dca772e189\") "
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.480769 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/06554c04-9d86-4813-b92c-669a3ae5a776-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.482669 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities" (OuterVolumeSpecName: "utilities") pod "2cc34f9f-085b-445c-b10d-e6241e66f722" (UID: "2cc34f9f-085b-445c-b10d-e6241e66f722"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.486977 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities" (OuterVolumeSpecName: "utilities") pod "55afdb67-75d7-4db9-bee0-95e43c4a07bd" (UID: "55afdb67-75d7-4db9-bee0-95e43c4a07bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.487181 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities" (OuterVolumeSpecName: "utilities") pod "7e5898b2-33b2-465b-bf38-07d11c8f67f1" (UID: "7e5898b2-33b2-465b-bf38-07d11c8f67f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.487318 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst" (OuterVolumeSpecName: "kube-api-access-czdst") pod "7e5898b2-33b2-465b-bf38-07d11c8f67f1" (UID: "7e5898b2-33b2-465b-bf38-07d11c8f67f1"). InnerVolumeSpecName "kube-api-access-czdst". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.488164 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "06554c04-9d86-4813-b92c-669a3ae5a776" (UID: "06554c04-9d86-4813-b92c-669a3ae5a776"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.488657 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j" (OuterVolumeSpecName: "kube-api-access-b672j") pod "2cc34f9f-085b-445c-b10d-e6241e66f722" (UID: "2cc34f9f-085b-445c-b10d-e6241e66f722"). InnerVolumeSpecName "kube-api-access-b672j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.488678 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "06554c04-9d86-4813-b92c-669a3ae5a776" (UID: "06554c04-9d86-4813-b92c-669a3ae5a776"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.490724 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities" (OuterVolumeSpecName: "utilities") pod "b51f4fcc-9be5-4925-b35e-75dca772e189" (UID: "b51f4fcc-9be5-4925-b35e-75dca772e189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.492440 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh" (OuterVolumeSpecName: "kube-api-access-cfcdh") pod "55afdb67-75d7-4db9-bee0-95e43c4a07bd" (UID: "55afdb67-75d7-4db9-bee0-95e43c4a07bd"). InnerVolumeSpecName "kube-api-access-cfcdh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.493189 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw" (OuterVolumeSpecName: "kube-api-access-l79rw") pod "b51f4fcc-9be5-4925-b35e-75dca772e189" (UID: "b51f4fcc-9be5-4925-b35e-75dca772e189"). InnerVolumeSpecName "kube-api-access-l79rw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.493686 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8" (OuterVolumeSpecName: "kube-api-access-4x8s8") pod "06554c04-9d86-4813-b92c-669a3ae5a776" (UID: "06554c04-9d86-4813-b92c-669a3ae5a776"). InnerVolumeSpecName "kube-api-access-4x8s8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.509426 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e5898b2-33b2-465b-bf38-07d11c8f67f1" (UID: "7e5898b2-33b2-465b-bf38-07d11c8f67f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.530982 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55afdb67-75d7-4db9-bee0-95e43c4a07bd" (UID: "55afdb67-75d7-4db9-bee0-95e43c4a07bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.558235 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b51f4fcc-9be5-4925-b35e-75dca772e189" (UID: "b51f4fcc-9be5-4925-b35e-75dca772e189"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.575326 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cc34f9f-085b-445c-b10d-e6241e66f722" (UID: "2cc34f9f-085b-445c-b10d-e6241e66f722"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.581994 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l79rw\" (UniqueName: \"kubernetes.io/projected/b51f4fcc-9be5-4925-b35e-75dca772e189-kube-api-access-l79rw\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582026 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfcdh\" (UniqueName: \"kubernetes.io/projected/55afdb67-75d7-4db9-bee0-95e43c4a07bd-kube-api-access-cfcdh\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582039 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582052 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582062 5129 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582072 5129 reconciler_common.go:299] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/06554c04-9d86-4813-b92c-669a3ae5a776-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582082 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4x8s8\" (UniqueName: \"kubernetes.io/projected/06554c04-9d86-4813-b92c-669a3ae5a776-kube-api-access-4x8s8\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582093 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582102 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55afdb67-75d7-4db9-bee0-95e43c4a07bd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582112 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b672j\" (UniqueName: \"kubernetes.io/projected/2cc34f9f-085b-445c-b10d-e6241e66f722-kube-api-access-b672j\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582122 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b51f4fcc-9be5-4925-b35e-75dca772e189-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582132 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czdst\" (UniqueName: \"kubernetes.io/projected/7e5898b2-33b2-465b-bf38-07d11c8f67f1-kube-api-access-czdst\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582143 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2cc34f9f-085b-445c-b10d-e6241e66f722-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582152 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.582162 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5898b2-33b2-465b-bf38-07d11c8f67f1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:36 crc kubenswrapper[5129]: I1211 16:58:36.619208 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-jbbw9"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.061314 5129 generic.go:358] "Generic (PLEG): container finished" podID="06554c04-9d86-4813-b92c-669a3ae5a776" containerID="976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947" exitCode=0 Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.061392 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.062714 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" event={"ID":"06554c04-9d86-4813-b92c-669a3ae5a776","Type":"ContainerDied","Data":"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.062914 5129 generic.go:358] "Generic (PLEG): container finished" podID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" exitCode=0 Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.063062 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mjbrt" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.063177 5129 scope.go:117] "RemoveContainer" containerID="976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.064828 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dkmmj" event={"ID":"06554c04-9d86-4813-b92c-669a3ae5a776","Type":"ContainerDied","Data":"afc5e3a637f02bdc1ddb8a26718db935635814bf3d38cd42798e78fe34b143d6"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.064862 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerDied","Data":"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.064876 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mjbrt" event={"ID":"55afdb67-75d7-4db9-bee0-95e43c4a07bd","Type":"ContainerDied","Data":"33283e4d339113fd865282947bd428f139cec50c0cb633e540a138e200c554c5"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.066609 5129 generic.go:358] "Generic (PLEG): container finished" podID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerID="2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2" exitCode=0 Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.066669 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerDied","Data":"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.066693 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcf8z" 
event={"ID":"2cc34f9f-085b-445c-b10d-e6241e66f722","Type":"ContainerDied","Data":"09de71e3f6d9aa7d4231c4f73ae5b1096e745212672541fdd0a0b89c80555b7e"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.066769 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcf8z" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.068552 5129 generic.go:358] "Generic (PLEG): container finished" podID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerID="0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa" exitCode=0 Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.068677 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerDied","Data":"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.068752 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7khq8" event={"ID":"7e5898b2-33b2-465b-bf38-07d11c8f67f1","Type":"ContainerDied","Data":"bb8ed1027c2f1c9b535345acd61e47f7299fddb1d9b5f7fa0b449e4acd1b589c"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.068902 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7khq8" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.070361 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9" event={"ID":"e5263d35-d90b-4666-9b5a-ea8148099ead","Type":"ContainerStarted","Data":"1b8795fd50c7b8e1fa59e298e796e04c20331c1e7ca973a63dc1e754791a72b1"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.070401 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9" event={"ID":"e5263d35-d90b-4666-9b5a-ea8148099ead","Type":"ContainerStarted","Data":"5404e71eacc369916766d9a5167113bb0fa9a052a7ead6f1f027f41c7d44fe29"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.073228 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.075975 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.077305 5129 generic.go:358] "Generic (PLEG): container finished" podID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39" exitCode=0 Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.077390 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerDied","Data":"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.077413 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc2p6" 
event={"ID":"b51f4fcc-9be5-4925-b35e-75dca772e189","Type":"ContainerDied","Data":"b6957f66d878d2c7a8427fe6b289c618c4f6caba2b837e5adf122b23a89ce9e6"} Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.077453 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc2p6" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.087770 5129 scope.go:117] "RemoveContainer" containerID="976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.088245 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947\": container with ID starting with 976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947 not found: ID does not exist" containerID="976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.088400 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947"} err="failed to get container status \"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947\": rpc error: code = NotFound desc = could not find container \"976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947\": container with ID starting with 976932f89aef8e0eba03438de5ae4c46e65e4ed27871e4a6119176044a02b947 not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.088458 5129 scope.go:117] "RemoveContainer" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.095201 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 
16:58:37.102528 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dkmmj"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.107358 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.110884 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dcf8z"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.133267 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-jbbw9" podStartSLOduration=1.133245489 podStartE2EDuration="1.133245489s" podCreationTimestamp="2025-12-11 16:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:58:37.126955756 +0000 UTC m=+260.930485803" watchObservedRunningTime="2025-12-11 16:58:37.133245489 +0000 UTC m=+260.936775516" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.145727 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.162808 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7khq8"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.164191 5129 scope.go:117] "RemoveContainer" containerID="e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.173666 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.177693 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mjbrt"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.180670 
5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc2p6"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.182201 5129 scope.go:117] "RemoveContainer" containerID="22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.183545 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nc2p6"] Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.198157 5129 scope.go:117] "RemoveContainer" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.198540 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97\": container with ID starting with 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 not found: ID does not exist" containerID="7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.198575 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97"} err="failed to get container status \"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97\": rpc error: code = NotFound desc = could not find container \"7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97\": container with ID starting with 7aeb53e18a8ea861a67b099076060e774bbcf01d66614aefd340a941452d8e97 not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.198598 5129 scope.go:117] "RemoveContainer" containerID="e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.198841 5129 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a\": container with ID starting with e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a not found: ID does not exist" containerID="e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.198886 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a"} err="failed to get container status \"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a\": rpc error: code = NotFound desc = could not find container \"e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a\": container with ID starting with e129523234c848c5aa1d43fb5548dc7670935e9dee99d1639d9cd4aa3f5a9c6a not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.198900 5129 scope.go:117] "RemoveContainer" containerID="22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.199298 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db\": container with ID starting with 22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db not found: ID does not exist" containerID="22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.199384 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db"} err="failed to get container status \"22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db\": rpc error: code = NotFound desc = could not find container 
\"22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db\": container with ID starting with 22e49221ed4c43060471d8d93dc9178313c532058e3064d5ba0281865fd450db not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.199462 5129 scope.go:117] "RemoveContainer" containerID="2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.214977 5129 scope.go:117] "RemoveContainer" containerID="b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.228082 5129 scope.go:117] "RemoveContainer" containerID="33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.242442 5129 scope.go:117] "RemoveContainer" containerID="2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.242846 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2\": container with ID starting with 2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2 not found: ID does not exist" containerID="2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.242883 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2"} err="failed to get container status \"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2\": rpc error: code = NotFound desc = could not find container \"2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2\": container with ID starting with 2ac45a238c648eacbc3d86d97e0e4607bf9bedfb30bdd3c73671945e8b0284f2 not found: ID does not exist" Dec 11 16:58:37 crc 
kubenswrapper[5129]: I1211 16:58:37.242906 5129 scope.go:117] "RemoveContainer" containerID="b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.243199 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34\": container with ID starting with b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34 not found: ID does not exist" containerID="b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.243223 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34"} err="failed to get container status \"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34\": rpc error: code = NotFound desc = could not find container \"b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34\": container with ID starting with b9865df1be6f3f7ad5ce50bce538a45171dbec7a0874016cf32f410282142d34 not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.243239 5129 scope.go:117] "RemoveContainer" containerID="33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.243569 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e\": container with ID starting with 33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e not found: ID does not exist" containerID="33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.243592 5129 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e"} err="failed to get container status \"33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e\": rpc error: code = NotFound desc = could not find container \"33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e\": container with ID starting with 33a7581915d4449c053eaed9b44bc64106313bfb53e4671a7d4c48915b76770e not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.243607 5129 scope.go:117] "RemoveContainer" containerID="0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.261531 5129 scope.go:117] "RemoveContainer" containerID="48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.276577 5129 scope.go:117] "RemoveContainer" containerID="45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.287794 5129 scope.go:117] "RemoveContainer" containerID="0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.289457 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa\": container with ID starting with 0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa not found: ID does not exist" containerID="0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.289497 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa"} err="failed to get container status \"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa\": rpc error: code = 
NotFound desc = could not find container \"0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa\": container with ID starting with 0e4f43a2ca52b01f35afa960ddafcc6f724ce0187b901ab9c3275de23e27cdaa not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.289538 5129 scope.go:117] "RemoveContainer" containerID="48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.289813 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c\": container with ID starting with 48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c not found: ID does not exist" containerID="48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.289847 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c"} err="failed to get container status \"48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c\": rpc error: code = NotFound desc = could not find container \"48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c\": container with ID starting with 48cc6a89a81d44f2dea32d0befef65f1199d91a6cb476f7c9f7154c302844e6c not found: ID does not exist" Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.289868 5129 scope.go:117] "RemoveContainer" containerID="45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf" Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.290062 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf\": container with ID starting with 
45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf not found: ID does not exist" containerID="45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.290091 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf"} err="failed to get container status \"45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf\": rpc error: code = NotFound desc = could not find container \"45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf\": container with ID starting with 45b0e650effadce535b63c29f645c01e3dcae7c3cd5ad82fc52911bb4bee88bf not found: ID does not exist"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.290108 5129 scope.go:117] "RemoveContainer" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.302754 5129 scope.go:117] "RemoveContainer" containerID="989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.314535 5129 scope.go:117] "RemoveContainer" containerID="4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.332459 5129 scope.go:117] "RemoveContainer" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"
Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.334253 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39\": container with ID starting with f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 not found: ID does not exist" containerID="f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.334296 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39"} err="failed to get container status \"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39\": rpc error: code = NotFound desc = could not find container \"f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39\": container with ID starting with f29a08793813dd488993476f866c008ae96f0ea1dce78ee4fc7a0b246ee95e39 not found: ID does not exist"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.334324 5129 scope.go:117] "RemoveContainer" containerID="989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"
Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.334812 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5\": container with ID starting with 989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5 not found: ID does not exist" containerID="989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.334852 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5"} err="failed to get container status \"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5\": rpc error: code = NotFound desc = could not find container \"989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5\": container with ID starting with 989b53d61e1ca83244ece427e258952600287393489d348c004ce713e4b540c5 not found: ID does not exist"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.334879 5129 scope.go:117] "RemoveContainer" containerID="4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a"
Dec 11 16:58:37 crc kubenswrapper[5129]: E1211 16:58:37.335176 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a\": container with ID starting with 4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a not found: ID does not exist" containerID="4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a"
Dec 11 16:58:37 crc kubenswrapper[5129]: I1211 16:58:37.335228 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a"} err="failed to get container status \"4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a\": rpc error: code = NotFound desc = could not find container \"4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a\": container with ID starting with 4a9efd56d96b4423a70ebbbbed5cc3a1e23fc1e409d2624a7696546113e0cc4a not found: ID does not exist"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.528113 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" path="/var/lib/kubelet/pods/06554c04-9d86-4813-b92c-669a3ae5a776/volumes"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.528614 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" path="/var/lib/kubelet/pods/2cc34f9f-085b-445c-b10d-e6241e66f722/volumes"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.529169 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" path="/var/lib/kubelet/pods/55afdb67-75d7-4db9-bee0-95e43c4a07bd/volumes"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.529868 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" path="/var/lib/kubelet/pods/7e5898b2-33b2-465b-bf38-07d11c8f67f1/volumes"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.530452 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" path="/var/lib/kubelet/pods/b51f4fcc-9be5-4925-b35e-75dca772e189/volumes"
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.946796 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 16:58:38 crc kubenswrapper[5129]: I1211 16:58:38.946882 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 16:58:44 crc kubenswrapper[5129]: I1211 16:58:44.923833 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 11 16:58:57 crc kubenswrapper[5129]: I1211 16:58:57.938658 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"]
Dec 11 16:58:57 crc kubenswrapper[5129]: I1211 16:58:57.939379 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" podUID="da272c91-3742-497e-b116-40d44d676527" containerName="controller-manager" containerID="cri-o://a99a74c8ca8580cdde61365c5554058f594791e353ac6109e1af3bdcbf3b9ec3" gracePeriod=30
Dec 11 16:58:57 crc kubenswrapper[5129]: I1211 16:58:57.973036 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"]
Dec 11 16:58:57 crc kubenswrapper[5129]: I1211 16:58:57.973279 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerName="route-controller-manager" containerID="cri-o://7c3d6ffc49391070c1e2f5c389a49688d9ef021c6aba9f7fd9a1ff87f2a816c3" gracePeriod=30
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.213694 5129 generic.go:358] "Generic (PLEG): container finished" podID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerID="7c3d6ffc49391070c1e2f5c389a49688d9ef021c6aba9f7fd9a1ff87f2a816c3" exitCode=0
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.213778 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" event={"ID":"94785b3a-cb7c-426f-ab27-b74c298f40f2","Type":"ContainerDied","Data":"7c3d6ffc49391070c1e2f5c389a49688d9ef021c6aba9f7fd9a1ff87f2a816c3"}
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.218281 5129 generic.go:358] "Generic (PLEG): container finished" podID="da272c91-3742-497e-b116-40d44d676527" containerID="a99a74c8ca8580cdde61365c5554058f594791e353ac6109e1af3bdcbf3b9ec3" exitCode=0
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.218377 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" event={"ID":"da272c91-3742-497e-b116-40d44d676527","Type":"ContainerDied","Data":"a99a74c8ca8580cdde61365c5554058f594791e353ac6109e1af3bdcbf3b9ec3"}
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.266319 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.305971 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"]
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306746 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306766 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306781 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da272c91-3742-497e-b116-40d44d676527" containerName="controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306787 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="da272c91-3742-497e-b116-40d44d676527" containerName="controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306796 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306803 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306819 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306826 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306834 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306840 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306850 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306858 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306876 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" containerName="marketplace-operator"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306883 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" containerName="marketplace-operator"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306902 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.306909 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307341 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307374 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307457 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307465 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307474 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307482 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307500 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307542 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307556 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307562 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="extract-content"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307571 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307578 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="extract-utilities"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307670 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e5898b2-33b2-465b-bf38-07d11c8f67f1" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307679 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="2cc34f9f-085b-445c-b10d-e6241e66f722" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307686 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="55afdb67-75d7-4db9-bee0-95e43c4a07bd" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307694 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="b51f4fcc-9be5-4925-b35e-75dca772e189" containerName="registry-server"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307704 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="da272c91-3742-497e-b116-40d44d676527" containerName="controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.307712 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="06554c04-9d86-4813-b92c-669a3ae5a776" containerName="marketplace-operator"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.313763 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"]
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.313871 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.334284 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.368783 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"]
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.369585 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerName="route-controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.369617 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerName="route-controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.369844 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" containerName="route-controller-manager"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.369920 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370051 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370109 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370135 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370167 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mtq\" (UniqueName: \"kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370222 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp\") pod \"da272c91-3742-497e-b116-40d44d676527\" (UID: \"da272c91-3742-497e-b116-40d44d676527\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370654 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp" (OuterVolumeSpecName: "tmp") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.370962 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca" (OuterVolumeSpecName: "client-ca") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.371040 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.371052 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config" (OuterVolumeSpecName: "config") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.375189 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"]
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.375306 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.376447 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.378210 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq" (OuterVolumeSpecName: "kube-api-access-72mtq") pod "da272c91-3742-497e-b116-40d44d676527" (UID: "da272c91-3742-497e-b116-40d44d676527"). InnerVolumeSpecName "kube-api-access-72mtq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.471555 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l44cs\" (UniqueName: \"kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs\") pod \"94785b3a-cb7c-426f-ab27-b74c298f40f2\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.471692 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config\") pod \"94785b3a-cb7c-426f-ab27-b74c298f40f2\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.471740 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp\") pod \"94785b3a-cb7c-426f-ab27-b74c298f40f2\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.471826 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert\") pod \"94785b3a-cb7c-426f-ab27-b74c298f40f2\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.471880 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca\") pod \"94785b3a-cb7c-426f-ab27-b74c298f40f2\" (UID: \"94785b3a-cb7c-426f-ab27-b74c298f40f2\") "
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472195 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp" (OuterVolumeSpecName: "tmp") pod "94785b3a-cb7c-426f-ab27-b74c298f40f2" (UID: "94785b3a-cb7c-426f-ab27-b74c298f40f2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472281 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472375 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472417 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472447 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sm28\" (UniqueName: \"kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472481 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472533 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca" (OuterVolumeSpecName: "client-ca") pod "94785b3a-cb7c-426f-ab27-b74c298f40f2" (UID: "94785b3a-cb7c-426f-ab27-b74c298f40f2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472578 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472588 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config" (OuterVolumeSpecName: "config") pod "94785b3a-cb7c-426f-ab27-b74c298f40f2" (UID: "94785b3a-cb7c-426f-ab27-b74c298f40f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472726 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472824 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472871 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqjx\" (UniqueName: \"kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.472897 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473016 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473108 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72mtq\" (UniqueName: \"kubernetes.io/projected/da272c91-3742-497e-b116-40d44d676527-kube-api-access-72mtq\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473121 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473134 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/da272c91-3742-497e-b116-40d44d676527-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473143 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da272c91-3742-497e-b116-40d44d676527-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473153 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94785b3a-cb7c-426f-ab27-b74c298f40f2-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473161 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94785b3a-cb7c-426f-ab27-b74c298f40f2-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473169 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473178 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.473186 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da272c91-3742-497e-b116-40d44d676527-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.476114 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs" (OuterVolumeSpecName: "kube-api-access-l44cs") pod "94785b3a-cb7c-426f-ab27-b74c298f40f2" (UID: "94785b3a-cb7c-426f-ab27-b74c298f40f2"). InnerVolumeSpecName "kube-api-access-l44cs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.478325 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94785b3a-cb7c-426f-ab27-b74c298f40f2" (UID: "94785b3a-cb7c-426f-ab27-b74c298f40f2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574405 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574453 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqjx\" (UniqueName: \"kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574483 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574531 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574580 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574607 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574636 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574657 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5sm28\" (UniqueName: \"kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp"
Dec 11 16:58:58 crc kubenswrapper[5129]: I1211
16:58:58.574679 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574711 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574753 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574798 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l44cs\" (UniqueName: \"kubernetes.io/projected/94785b3a-cb7c-426f-ab27-b74c298f40f2-kube-api-access-l44cs\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.574815 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94785b3a-cb7c-426f-ab27-b74c298f40f2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.576348 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp\") pod 
\"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.576560 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.576856 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.577242 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.577307 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.579216 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.584593 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.585700 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.586640 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.595354 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqjx\" (UniqueName: \"kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx\") pod \"route-controller-manager-58fcbccb67-dwbh2\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.607885 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sm28\" (UniqueName: \"kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28\") pod \"controller-manager-668874fb99-m6gdp\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.622853 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.630036 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.632451 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.640372 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.646566 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.692448 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.777337 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.777401 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbc7r\" (UniqueName: \"kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.777436 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.820221 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fx8mc"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.828642 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.830932 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx8mc"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.831822 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.862992 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.879091 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.879174 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbc7r\" (UniqueName: \"kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.879227 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.879535 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.879675 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.899603 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbc7r\" (UniqueName: \"kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r\") pod \"redhat-marketplace-5hlcw\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.912665 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"] Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.964448 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.980446 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-catalog-content\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.980480 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfgk\" (UniqueName: \"kubernetes.io/projected/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-kube-api-access-jtfgk\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:58 crc kubenswrapper[5129]: I1211 16:58:58.980550 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-utilities\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.094172 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55778: no serving certificate available for the kubelet" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.094394 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-utilities\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.094497 5129 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-catalog-content\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.094549 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtfgk\" (UniqueName: \"kubernetes.io/projected/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-kube-api-access-jtfgk\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.095856 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-utilities\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.096116 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-catalog-content\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.112653 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtfgk\" (UniqueName: \"kubernetes.io/projected/ab0a7b94-dc15-4b07-b413-f20fde6d7a72-kube-api-access-jtfgk\") pod \"community-operators-fx8mc\" (UID: \"ab0a7b94-dc15-4b07-b413-f20fde6d7a72\") " pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.149113 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.221758 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.236004 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" event={"ID":"da272c91-3742-497e-b116-40d44d676527","Type":"ContainerDied","Data":"385eda6023b7bb9a81b8279c1d41615e257359d69e0ba0885b24119f286f30cc"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.236052 5129 scope.go:117] "RemoveContainer" containerID="a99a74c8ca8580cdde61365c5554058f594791e353ac6109e1af3bdcbf3b9ec3" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.236552 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-fmrcf" Dec 11 16:58:59 crc kubenswrapper[5129]: W1211 16:58:59.239382 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b8f5017_2bba_4282_afb7_a8728ec2a378.slice/crio-07d6b227155851c3f886391ed32ea13c4e22f3b8eb96955989a4cce7b007484b WatchSource:0}: Error finding container 07d6b227155851c3f886391ed32ea13c4e22f3b8eb96955989a4cce7b007484b: Status 404 returned error can't find the container with id 07d6b227155851c3f886391ed32ea13c4e22f3b8eb96955989a4cce7b007484b Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.245483 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" event={"ID":"0b764289-5730-4634-97af-17dd762c64da","Type":"ContainerStarted","Data":"a99f9a81432b25b62ab46639622e9f6f5447fd9f273e9958bae499ce919c9a62"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.245588 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" event={"ID":"0b764289-5730-4634-97af-17dd762c64da","Type":"ContainerStarted","Data":"5d9b8970f29d49726fb8f66f6ec19a1c4c293827e86527bd910365d6561789e4"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.249445 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.283791 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" event={"ID":"5e3c17ca-2815-4b64-b10c-d72ffccb81ad","Type":"ContainerStarted","Data":"ea89761b23a37b0368491294e1c53942f2d87123885ae54e30b3a1db47936700"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.283961 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" event={"ID":"5e3c17ca-2815-4b64-b10c-d72ffccb81ad","Type":"ContainerStarted","Data":"34c63c152c1cc9d8c677d1c254304a138ff971d776ce18baa8cdc02280fd5cd4"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.284596 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.285286 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" podStartSLOduration=1.28527162 podStartE2EDuration="1.28527162s" podCreationTimestamp="2025-12-11 16:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:58:59.280250631 +0000 UTC m=+283.083780658" watchObservedRunningTime="2025-12-11 16:58:59.28527162 +0000 UTC m=+283.088801637" Dec 11 16:58:59 crc kubenswrapper[5129]: 
I1211 16:58:59.297905 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.302382 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" event={"ID":"94785b3a-cb7c-426f-ab27-b74c298f40f2","Type":"ContainerDied","Data":"3992e041d562bb7e18ffe83af590bd174a44ceec6acb7df7202e0753f5fad166"} Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.302438 5129 scope.go:117] "RemoveContainer" containerID="7c3d6ffc49391070c1e2f5c389a49688d9ef021c6aba9f7fd9a1ff87f2a816c3" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.303954 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-fmrcf"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.304100 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.314775 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" podStartSLOduration=2.314755884 podStartE2EDuration="2.314755884s" podCreationTimestamp="2025-12-11 16:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:58:59.313687823 +0000 UTC m=+283.117217830" watchObservedRunningTime="2025-12-11 16:58:59.314755884 +0000 UTC m=+283.118285891" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.331729 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.332625 5129 kubelet.go:2547] "SyncLoop REMOVE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jfgtl"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.543156 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.563494 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fx8mc"] Dec 11 16:58:59 crc kubenswrapper[5129]: I1211 16:58:59.705061 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.250025 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nstvt"] Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.253789 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.257039 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.268055 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nstvt"] Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.309824 5129 generic.go:358] "Generic (PLEG): container finished" podID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerID="0f6c81b6c58e37d59a87f62527a545db9127331bc952a243dc3f369eb1c63abd" exitCode=0 Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.310138 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" 
event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerDied","Data":"0f6c81b6c58e37d59a87f62527a545db9127331bc952a243dc3f369eb1c63abd"} Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.310248 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerStarted","Data":"07d6b227155851c3f886391ed32ea13c4e22f3b8eb96955989a4cce7b007484b"} Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.312184 5129 generic.go:358] "Generic (PLEG): container finished" podID="ab0a7b94-dc15-4b07-b413-f20fde6d7a72" containerID="d1333f11044c8c8cc53749352ee4cc0774b754204ba7658da31eb020f3cb7c0f" exitCode=0 Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.312526 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx8mc" event={"ID":"ab0a7b94-dc15-4b07-b413-f20fde6d7a72","Type":"ContainerDied","Data":"d1333f11044c8c8cc53749352ee4cc0774b754204ba7658da31eb020f3cb7c0f"} Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.312555 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx8mc" event={"ID":"ab0a7b94-dc15-4b07-b413-f20fde6d7a72","Type":"ContainerStarted","Data":"e952049ce395038da2d25739f74482ee24a1dc982550bc21e4491879bfc647c5"} Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.429898 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmpm\" (UniqueName: \"kubernetes.io/projected/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-kube-api-access-ltmpm\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.429971 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-catalog-content\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.430029 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-utilities\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.531036 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltmpm\" (UniqueName: \"kubernetes.io/projected/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-kube-api-access-ltmpm\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.531138 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-catalog-content\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.531230 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-utilities\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.532342 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-utilities\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.533127 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-catalog-content\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.535501 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94785b3a-cb7c-426f-ab27-b74c298f40f2" path="/var/lib/kubelet/pods/94785b3a-cb7c-426f-ab27-b74c298f40f2/volumes" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.536825 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da272c91-3742-497e-b116-40d44d676527" path="/var/lib/kubelet/pods/da272c91-3742-497e-b116-40d44d676527/volumes" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.566632 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltmpm\" (UniqueName: \"kubernetes.io/projected/eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa-kube-api-access-ltmpm\") pod \"certified-operators-nstvt\" (UID: \"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa\") " pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.573202 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.592183 5129 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 16:59:00 crc kubenswrapper[5129]: I1211 16:59:00.828734 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nstvt"] Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.319096 5129 generic.go:358] "Generic (PLEG): container finished" podID="ab0a7b94-dc15-4b07-b413-f20fde6d7a72" containerID="c1c166c8f9f968b1f56e64db01583d93234c63632a4a4b4a1969cf24448b7aaa" exitCode=0 Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.319166 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx8mc" event={"ID":"ab0a7b94-dc15-4b07-b413-f20fde6d7a72","Type":"ContainerDied","Data":"c1c166c8f9f968b1f56e64db01583d93234c63632a4a4b4a1969cf24448b7aaa"} Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.320625 5129 generic.go:358] "Generic (PLEG): container finished" podID="eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa" containerID="7c67d83ae93c17a98c6026c42fb2184fe47d7989a5e9fa8199478faf6b1142b9" exitCode=0 Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.320786 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nstvt" event={"ID":"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa","Type":"ContainerDied","Data":"7c67d83ae93c17a98c6026c42fb2184fe47d7989a5e9fa8199478faf6b1142b9"} Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.321326 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nstvt" event={"ID":"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa","Type":"ContainerStarted","Data":"746fbc55109e6b8fe19214e3df041d6f8f607061e8fa4b1ea3b4d19c8a35ca5e"} Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.325287 5129 
generic.go:358] "Generic (PLEG): container finished" podID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerID="89658c39c18c3e74dde3abb21d5fd8fde8e5f048b4f4cf1bba895d3455b3f4ae" exitCode=0 Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.325697 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerDied","Data":"89658c39c18c3e74dde3abb21d5fd8fde8e5f048b4f4cf1bba895d3455b3f4ae"} Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.430743 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scl98"] Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.442755 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scl98"] Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.442913 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.444871 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.545555 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4bg\" (UniqueName: \"kubernetes.io/projected/bfd4b34e-99f8-42b9-b195-a5383febb2e0-kube-api-access-8q4bg\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.545988 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-utilities\") pod \"redhat-operators-scl98\" (UID: 
\"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.546038 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-catalog-content\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.647110 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-utilities\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.647193 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-catalog-content\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.647230 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4bg\" (UniqueName: \"kubernetes.io/projected/bfd4b34e-99f8-42b9-b195-a5383febb2e0-kube-api-access-8q4bg\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.647790 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-utilities\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " 
pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.647944 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd4b34e-99f8-42b9-b195-a5383febb2e0-catalog-content\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.666470 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q4bg\" (UniqueName: \"kubernetes.io/projected/bfd4b34e-99f8-42b9-b195-a5383febb2e0-kube-api-access-8q4bg\") pod \"redhat-operators-scl98\" (UID: \"bfd4b34e-99f8-42b9-b195-a5383febb2e0\") " pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.763597 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:01 crc kubenswrapper[5129]: I1211 16:59:01.976083 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scl98"] Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.331951 5129 generic.go:358] "Generic (PLEG): container finished" podID="bfd4b34e-99f8-42b9-b195-a5383febb2e0" containerID="becd4f2a794ab7e4207a35b8c9c49f5c75cc9da085c56a623c37074794e6728f" exitCode=0 Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.332004 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scl98" event={"ID":"bfd4b34e-99f8-42b9-b195-a5383febb2e0","Type":"ContainerDied","Data":"becd4f2a794ab7e4207a35b8c9c49f5c75cc9da085c56a623c37074794e6728f"} Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.332058 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scl98" 
event={"ID":"bfd4b34e-99f8-42b9-b195-a5383febb2e0","Type":"ContainerStarted","Data":"d6b5dafa3730ba3f1d68e246a42352bc4efee1c77e5cc84b49dd5f4ff17647b5"} Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.333806 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fx8mc" event={"ID":"ab0a7b94-dc15-4b07-b413-f20fde6d7a72","Type":"ContainerStarted","Data":"52ede1bd33831747f5b84f50afea1b29c813897a208eadd608df697bd1cb1e58"} Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.335821 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nstvt" event={"ID":"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa","Type":"ContainerStarted","Data":"b40fe2117171d5fbf2fb2fdcec4a3bfc7eec1de08ff6ccc69b11ab3fb75a6b96"} Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.338420 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerStarted","Data":"0ff8eeb6a2322cd3466dd21372ffd8b50b455159d3aa8b15349823ce1f0b5298"} Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.370798 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5hlcw" podStartSLOduration=3.7931470689999998 podStartE2EDuration="4.370782327s" podCreationTimestamp="2025-12-11 16:58:58 +0000 UTC" firstStartedPulling="2025-12-11 16:59:00.310909915 +0000 UTC m=+284.114439932" lastFinishedPulling="2025-12-11 16:59:00.888545173 +0000 UTC m=+284.692075190" observedRunningTime="2025-12-11 16:59:02.369053536 +0000 UTC m=+286.172583543" watchObservedRunningTime="2025-12-11 16:59:02.370782327 +0000 UTC m=+286.174312344" Dec 11 16:59:02 crc kubenswrapper[5129]: I1211 16:59:02.404605 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fx8mc" podStartSLOduration=3.879876371 
podStartE2EDuration="4.40458672s" podCreationTimestamp="2025-12-11 16:58:58 +0000 UTC" firstStartedPulling="2025-12-11 16:59:00.313389708 +0000 UTC m=+284.116919725" lastFinishedPulling="2025-12-11 16:59:00.838100037 +0000 UTC m=+284.641630074" observedRunningTime="2025-12-11 16:59:02.402156398 +0000 UTC m=+286.205686405" watchObservedRunningTime="2025-12-11 16:59:02.40458672 +0000 UTC m=+286.208116737" Dec 11 16:59:03 crc kubenswrapper[5129]: I1211 16:59:03.352615 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scl98" event={"ID":"bfd4b34e-99f8-42b9-b195-a5383febb2e0","Type":"ContainerStarted","Data":"e67888a4284088ee076eb9bdde1dc8b045d703d5e2ee1a6285923237e898aa48"} Dec 11 16:59:03 crc kubenswrapper[5129]: I1211 16:59:03.354300 5129 generic.go:358] "Generic (PLEG): container finished" podID="eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa" containerID="b40fe2117171d5fbf2fb2fdcec4a3bfc7eec1de08ff6ccc69b11ab3fb75a6b96" exitCode=0 Dec 11 16:59:03 crc kubenswrapper[5129]: I1211 16:59:03.354348 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nstvt" event={"ID":"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa","Type":"ContainerDied","Data":"b40fe2117171d5fbf2fb2fdcec4a3bfc7eec1de08ff6ccc69b11ab3fb75a6b96"} Dec 11 16:59:04 crc kubenswrapper[5129]: I1211 16:59:04.361680 5129 generic.go:358] "Generic (PLEG): container finished" podID="bfd4b34e-99f8-42b9-b195-a5383febb2e0" containerID="e67888a4284088ee076eb9bdde1dc8b045d703d5e2ee1a6285923237e898aa48" exitCode=0 Dec 11 16:59:04 crc kubenswrapper[5129]: I1211 16:59:04.361800 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scl98" event={"ID":"bfd4b34e-99f8-42b9-b195-a5383febb2e0","Type":"ContainerDied","Data":"e67888a4284088ee076eb9bdde1dc8b045d703d5e2ee1a6285923237e898aa48"} Dec 11 16:59:04 crc kubenswrapper[5129]: I1211 16:59:04.366691 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nstvt" event={"ID":"eb0b59e9-46fe-4a1e-b2ed-d49ef89fdbfa","Type":"ContainerStarted","Data":"aeae076db08680a8e0f09f976cce7ecf5e5f25047a7ecf660b26bf7d3698b87b"} Dec 11 16:59:04 crc kubenswrapper[5129]: I1211 16:59:04.421808 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nstvt" podStartSLOduration=3.673654183 podStartE2EDuration="4.421792727s" podCreationTimestamp="2025-12-11 16:59:00 +0000 UTC" firstStartedPulling="2025-12-11 16:59:01.321821642 +0000 UTC m=+285.125351689" lastFinishedPulling="2025-12-11 16:59:02.069960216 +0000 UTC m=+285.873490233" observedRunningTime="2025-12-11 16:59:04.419795898 +0000 UTC m=+288.223325915" watchObservedRunningTime="2025-12-11 16:59:04.421792727 +0000 UTC m=+288.225322744" Dec 11 16:59:05 crc kubenswrapper[5129]: I1211 16:59:05.379754 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scl98" event={"ID":"bfd4b34e-99f8-42b9-b195-a5383febb2e0","Type":"ContainerStarted","Data":"689aee29f85b42aad420d90d62872a5483c7866ea59ae5b776ba6fb469e0384a"} Dec 11 16:59:05 crc kubenswrapper[5129]: I1211 16:59:05.399556 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scl98" podStartSLOduration=3.750878376 podStartE2EDuration="4.399535961s" podCreationTimestamp="2025-12-11 16:59:01 +0000 UTC" firstStartedPulling="2025-12-11 16:59:02.332616716 +0000 UTC m=+286.136146723" lastFinishedPulling="2025-12-11 16:59:02.981274271 +0000 UTC m=+286.784804308" observedRunningTime="2025-12-11 16:59:05.396320876 +0000 UTC m=+289.199850913" watchObservedRunningTime="2025-12-11 16:59:05.399535961 +0000 UTC m=+289.203065978" Dec 11 16:59:06 crc kubenswrapper[5129]: I1211 16:59:06.866564 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"] Dec 11 16:59:06 crc 
kubenswrapper[5129]: I1211 16:59:06.866816 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" podUID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" containerName="controller-manager" containerID="cri-o://ea89761b23a37b0368491294e1c53942f2d87123885ae54e30b3a1db47936700" gracePeriod=30 Dec 11 16:59:06 crc kubenswrapper[5129]: I1211 16:59:06.880288 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"] Dec 11 16:59:06 crc kubenswrapper[5129]: I1211 16:59:06.880626 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" podUID="0b764289-5730-4634-97af-17dd762c64da" containerName="route-controller-manager" containerID="cri-o://a99f9a81432b25b62ab46639622e9f6f5447fd9f273e9958bae499ce919c9a62" gracePeriod=30 Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.392654 5129 generic.go:358] "Generic (PLEG): container finished" podID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" containerID="ea89761b23a37b0368491294e1c53942f2d87123885ae54e30b3a1db47936700" exitCode=0 Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.392740 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" event={"ID":"5e3c17ca-2815-4b64-b10c-d72ffccb81ad","Type":"ContainerDied","Data":"ea89761b23a37b0368491294e1c53942f2d87123885ae54e30b3a1db47936700"} Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.394656 5129 generic.go:358] "Generic (PLEG): container finished" podID="0b764289-5730-4634-97af-17dd762c64da" containerID="a99f9a81432b25b62ab46639622e9f6f5447fd9f273e9958bae499ce919c9a62" exitCode=0 Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.394688 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" event={"ID":"0b764289-5730-4634-97af-17dd762c64da","Type":"ContainerDied","Data":"a99f9a81432b25b62ab46639622e9f6f5447fd9f273e9958bae499ce919c9a62"} Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.843682 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.886317 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"] Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.886897 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0b764289-5730-4634-97af-17dd762c64da" containerName="route-controller-manager" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.886916 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b764289-5730-4634-97af-17dd762c64da" containerName="route-controller-manager" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.887026 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="0b764289-5730-4634-97af-17dd762c64da" containerName="route-controller-manager" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.891413 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917566 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp\") pod \"0b764289-5730-4634-97af-17dd762c64da\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917670 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca\") pod \"0b764289-5730-4634-97af-17dd762c64da\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917736 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sm28\" (UniqueName: \"kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917809 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqjx\" (UniqueName: \"kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx\") pod \"0b764289-5730-4634-97af-17dd762c64da\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917914 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config\") pod \"0b764289-5730-4634-97af-17dd762c64da\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.917969 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918030 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918066 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918130 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918157 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp" (OuterVolumeSpecName: "tmp") pod "0b764289-5730-4634-97af-17dd762c64da" (UID: "0b764289-5730-4634-97af-17dd762c64da"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918218 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert\") pod \"0b764289-5730-4634-97af-17dd762c64da\" (UID: \"0b764289-5730-4634-97af-17dd762c64da\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918677 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp\") pod \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\" (UID: \"5e3c17ca-2815-4b64-b10c-d72ffccb81ad\") " Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918417 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b764289-5730-4634-97af-17dd762c64da" (UID: "0b764289-5730-4634-97af-17dd762c64da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918685 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.918855 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp" (OuterVolumeSpecName: "tmp") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919094 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919105 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b764289-5730-4634-97af-17dd762c64da-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919114 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919124 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919161 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919270 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config" (OuterVolumeSpecName: "config") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.919611 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config" (OuterVolumeSpecName: "config") pod "0b764289-5730-4634-97af-17dd762c64da" (UID: "0b764289-5730-4634-97af-17dd762c64da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.933229 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28" (OuterVolumeSpecName: "kube-api-access-5sm28") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "kube-api-access-5sm28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.936877 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e3c17ca-2815-4b64-b10c-d72ffccb81ad" (UID: "5e3c17ca-2815-4b64-b10c-d72ffccb81ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.936994 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx" (OuterVolumeSpecName: "kube-api-access-7tqjx") pod "0b764289-5730-4634-97af-17dd762c64da" (UID: "0b764289-5730-4634-97af-17dd762c64da"). InnerVolumeSpecName "kube-api-access-7tqjx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:59:07 crc kubenswrapper[5129]: I1211 16:59:07.937666 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b764289-5730-4634-97af-17dd762c64da" (UID: "0b764289-5730-4634-97af-17dd762c64da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020058 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5sm28\" (UniqueName: \"kubernetes.io/projected/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-kube-api-access-5sm28\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020086 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tqjx\" (UniqueName: \"kubernetes.io/projected/0b764289-5730-4634-97af-17dd762c64da-kube-api-access-7tqjx\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020095 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b764289-5730-4634-97af-17dd762c64da-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020106 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020116 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020124 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e3c17ca-2815-4b64-b10c-d72ffccb81ad-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.020135 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b764289-5730-4634-97af-17dd762c64da-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133251 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133314 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133337 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133865 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" containerName="controller-manager" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133886 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" containerName="controller-manager" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.133990 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" containerName="controller-manager" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.156555 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.156777 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222276 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222331 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222377 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8zb\" (UniqueName: \"kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222423 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222456 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222527 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222567 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222614 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzwk\" (UniqueName: \"kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222634 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: 
\"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222673 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.222701 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.323387 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.323664 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.323871 5129 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rs8zb\" (UniqueName: \"kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.323950 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324025 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324102 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324198 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 
16:59:08.324313 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzwk\" (UniqueName: \"kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324367 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324575 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324627 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.324953 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca\") pod \"controller-manager-56686966c9-p5lwg\" (UID: 
\"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.325089 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.325340 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.325358 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.325958 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.326081 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config\") pod 
\"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.326189 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.331244 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.338424 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.355530 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8zb\" (UniqueName: \"kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb\") pod \"controller-manager-56686966c9-p5lwg\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") " pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.355888 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5wzwk\" (UniqueName: \"kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk\") pod \"route-controller-manager-588cf57d4c-4nbzd\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.404937 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" event={"ID":"0b764289-5730-4634-97af-17dd762c64da","Type":"ContainerDied","Data":"5d9b8970f29d49726fb8f66f6ec19a1c4c293827e86527bd910365d6561789e4"} Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.405047 5129 scope.go:117] "RemoveContainer" containerID="a99f9a81432b25b62ab46639622e9f6f5447fd9f273e9958bae499ce919c9a62" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.405144 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.408912 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" event={"ID":"5e3c17ca-2815-4b64-b10c-d72ffccb81ad","Type":"ContainerDied","Data":"34c63c152c1cc9d8c677d1c254304a138ff971d776ce18baa8cdc02280fd5cd4"} Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.409050 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-668874fb99-m6gdp" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.426111 5129 scope.go:117] "RemoveContainer" containerID="ea89761b23a37b0368491294e1c53942f2d87123885ae54e30b3a1db47936700" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.449856 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.455838 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.460101 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-668874fb99-m6gdp"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.472972 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.481865 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.484443 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58fcbccb67-dwbh2"] Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.529823 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b764289-5730-4634-97af-17dd762c64da" path="/var/lib/kubelet/pods/0b764289-5730-4634-97af-17dd762c64da/volumes" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.530351 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3c17ca-2815-4b64-b10c-d72ffccb81ad" path="/var/lib/kubelet/pods/5e3c17ca-2815-4b64-b10c-d72ffccb81ad/volumes" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.724208 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"] Dec 11 16:59:08 crc kubenswrapper[5129]: W1211 16:59:08.732673 5129 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c01b31_aadf_4115_9a72_cdb687f8a70a.slice/crio-13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b WatchSource:0}: Error finding container 13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b: Status 404 returned error can't find the container with id 13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.875809 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"] Dec 11 16:59:08 crc kubenswrapper[5129]: W1211 16:59:08.888495 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice/crio-433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf WatchSource:0}: Error finding container 433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf: Status 404 returned error can't find the container with id 433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.946869 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.947538 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.947666 5129 kubelet.go:2658] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.965476 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:59:08 crc kubenswrapper[5129]: I1211 16:59:08.965674 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.002599 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.002666 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf" gracePeriod=600 Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.002797 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.149598 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.149789 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.197972 5129 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.418025 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" event={"ID":"5ee1c49f-73d9-487f-ba2f-94ad553307e9","Type":"ContainerStarted","Data":"28a07cfd9edc06e5929e2397f699d23b1d456f5ba707a869786ed4d695b18203"} Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.418328 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" event={"ID":"5ee1c49f-73d9-487f-ba2f-94ad553307e9","Type":"ContainerStarted","Data":"433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf"} Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.421010 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.428640 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" event={"ID":"a0c01b31-aadf-4115-9a72-cdb687f8a70a","Type":"ContainerStarted","Data":"ba87c4a8a0a5e814818b404929eb9db675d21cdbb1c3d1d1067f56414f2e57fb"} Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.428716 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.428747 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" event={"ID":"a0c01b31-aadf-4115-9a72-cdb687f8a70a","Type":"ContainerStarted","Data":"13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b"} Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.443577 5129 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" podStartSLOduration=3.443554002 podStartE2EDuration="3.443554002s" podCreationTimestamp="2025-12-11 16:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:09.442408738 +0000 UTC m=+293.245938765" watchObservedRunningTime="2025-12-11 16:59:09.443554002 +0000 UTC m=+293.247084039" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.470561 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" podStartSLOduration=3.470537642 podStartE2EDuration="3.470537642s" podCreationTimestamp="2025-12-11 16:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:09.468237084 +0000 UTC m=+293.271767131" watchObservedRunningTime="2025-12-11 16:59:09.470537642 +0000 UTC m=+293.274067669" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.477163 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fx8mc" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.495632 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.593161 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:09 crc kubenswrapper[5129]: I1211 16:59:09.788384 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 
16:59:10.435807 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf" exitCode=0 Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 16:59:10.435904 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf"} Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 16:59:10.436205 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7"} Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 16:59:10.574175 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 16:59:10.574239 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:10 crc kubenswrapper[5129]: I1211 16:59:10.635608 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:11 crc kubenswrapper[5129]: I1211 16:59:11.490452 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nstvt" Dec 11 16:59:11 crc kubenswrapper[5129]: I1211 16:59:11.764230 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:11 crc kubenswrapper[5129]: I1211 16:59:11.764574 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:11 crc kubenswrapper[5129]: I1211 16:59:11.834153 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:12 crc kubenswrapper[5129]: I1211 16:59:12.511223 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-scl98" Dec 11 16:59:13 crc kubenswrapper[5129]: I1211 16:59:13.918494 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"] Dec 11 16:59:13 crc kubenswrapper[5129]: I1211 16:59:13.919185 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" podUID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" containerName="controller-manager" containerID="cri-o://ba87c4a8a0a5e814818b404929eb9db675d21cdbb1c3d1d1067f56414f2e57fb" gracePeriod=30 Dec 11 16:59:13 crc kubenswrapper[5129]: I1211 16:59:13.937076 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"] Dec 11 16:59:13 crc kubenswrapper[5129]: I1211 16:59:13.937643 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" podUID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" containerName="route-controller-manager" containerID="cri-o://28a07cfd9edc06e5929e2397f699d23b1d456f5ba707a869786ed4d695b18203" gracePeriod=30 Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.469490 5129 generic.go:358] "Generic (PLEG): container finished" podID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" containerID="28a07cfd9edc06e5929e2397f699d23b1d456f5ba707a869786ed4d695b18203" exitCode=0 Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.469623 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" event={"ID":"5ee1c49f-73d9-487f-ba2f-94ad553307e9","Type":"ContainerDied","Data":"28a07cfd9edc06e5929e2397f699d23b1d456f5ba707a869786ed4d695b18203"} Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.469936 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" event={"ID":"5ee1c49f-73d9-487f-ba2f-94ad553307e9","Type":"ContainerDied","Data":"433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf"} Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.469949 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433010ccddf7eacd235342eeaf10098b933142bf9acd93b2124cf5bf536a2ddf" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.474077 5129 generic.go:358] "Generic (PLEG): container finished" podID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" containerID="ba87c4a8a0a5e814818b404929eb9db675d21cdbb1c3d1d1067f56414f2e57fb" exitCode=0 Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.474173 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" event={"ID":"a0c01b31-aadf-4115-9a72-cdb687f8a70a","Type":"ContainerDied","Data":"ba87c4a8a0a5e814818b404929eb9db675d21cdbb1c3d1d1067f56414f2e57fb"} Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.474219 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" event={"ID":"a0c01b31-aadf-4115-9a72-cdb687f8a70a","Type":"ContainerDied","Data":"13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b"} Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.474238 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c3fdfd32bb3ef4310275d1dc0cefd5b3a1845b6080581ca6c82f544d26488b" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 
16:59:15.507797 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.511275 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.541745 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"] Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542374 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" containerName="controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542387 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" containerName="controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542404 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" containerName="route-controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542409 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" containerName="route-controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542555 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" containerName="route-controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.542569 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" containerName="controller-manager" Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.550442 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"]
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.550660 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.558595 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"]
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.563799 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.573241 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"]
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623545 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8zb\" (UniqueName: \"kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623613 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623658 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp\") pod \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623691 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert\") pod \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623722 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config\") pod \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623783 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623802 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623845 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623869 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca\") pod \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623887 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzwk\" (UniqueName: \"kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk\") pod \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\" (UID: \"5ee1c49f-73d9-487f-ba2f-94ad553307e9\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.623924 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config\") pod \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\" (UID: \"a0c01b31-aadf-4115-9a72-cdb687f8a70a\") "
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624039 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624070 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4dh\" (UniqueName: \"kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624093 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624114 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624172 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624193 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624230 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624255 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624277 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624338 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.624390 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468mc\" (UniqueName: \"kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.626229 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp" (OuterVolumeSpecName: "tmp") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.626108 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config" (OuterVolumeSpecName: "config") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.626536 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp" (OuterVolumeSpecName: "tmp") pod "5ee1c49f-73d9-487f-ba2f-94ad553307e9" (UID: "5ee1c49f-73d9-487f-ba2f-94ad553307e9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.627002 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ee1c49f-73d9-487f-ba2f-94ad553307e9" (UID: "5ee1c49f-73d9-487f-ba2f-94ad553307e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.627439 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.627448 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.627941 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config" (OuterVolumeSpecName: "config") pod "5ee1c49f-73d9-487f-ba2f-94ad553307e9" (UID: "5ee1c49f-73d9-487f-ba2f-94ad553307e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.629982 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.630023 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb" (OuterVolumeSpecName: "kube-api-access-rs8zb") pod "a0c01b31-aadf-4115-9a72-cdb687f8a70a" (UID: "a0c01b31-aadf-4115-9a72-cdb687f8a70a"). InnerVolumeSpecName "kube-api-access-rs8zb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.630092 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ee1c49f-73d9-487f-ba2f-94ad553307e9" (UID: "5ee1c49f-73d9-487f-ba2f-94ad553307e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.631207 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk" (OuterVolumeSpecName: "kube-api-access-5wzwk") pod "5ee1c49f-73d9-487f-ba2f-94ad553307e9" (UID: "5ee1c49f-73d9-487f-ba2f-94ad553307e9"). InnerVolumeSpecName "kube-api-access-5wzwk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725723 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725768 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725791 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725810 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725827 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725863 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.725892 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-468mc\" (UniqueName: \"kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.726876 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727379 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727451 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4dh\" (UniqueName: \"kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727460 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727487 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727553 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727729 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a0c01b31-aadf-4115-9a72-cdb687f8a70a-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727750 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727762 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727772 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727783 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5wzwk\" (UniqueName: \"kubernetes.io/projected/5ee1c49f-73d9-487f-ba2f-94ad553307e9-kube-api-access-5wzwk\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727794 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c01b31-aadf-4115-9a72-cdb687f8a70a-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727806 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rs8zb\" (UniqueName: \"kubernetes.io/projected/a0c01b31-aadf-4115-9a72-cdb687f8a70a-kube-api-access-rs8zb\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727816 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c01b31-aadf-4115-9a72-cdb687f8a70a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727828 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ee1c49f-73d9-487f-ba2f-94ad553307e9-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727838 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee1c49f-73d9-487f-ba2f-94ad553307e9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.727860 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee1c49f-73d9-487f-ba2f-94ad553307e9-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.728195 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.728431 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.728576 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.728610 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.729171 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.730977 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.731698 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.740585 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-468mc\" (UniqueName: \"kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc\") pod \"controller-manager-785dfb659c-98fdz\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") " pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.746045 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4dh\" (UniqueName: \"kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh\") pod \"route-controller-manager-858f6c755f-d7ssx\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.912777 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:15 crc kubenswrapper[5129]: I1211 16:59:15.913773 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.112636 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"]
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.396487 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"]
Dec 11 16:59:16 crc kubenswrapper[5129]: W1211 16:59:16.402542 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14aca058_5a50_4afe_b1ec_54428d29ae14.slice/crio-09b469de878fdb7a10311388591278edd8ad41a8200b1e7726455abf68e8d7e5 WatchSource:0}: Error finding container 09b469de878fdb7a10311388591278edd8ad41a8200b1e7726455abf68e8d7e5: Status 404 returned error can't find the container with id 09b469de878fdb7a10311388591278edd8ad41a8200b1e7726455abf68e8d7e5
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.480911 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" event={"ID":"29a25d35-56b3-4721-a606-28ba3b44cb0f","Type":"ContainerStarted","Data":"52f3371058d8c05db40acca6e124d5a045dc7d953e00353154c758588e1b6180"}
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.480952 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" event={"ID":"29a25d35-56b3-4721-a606-28ba3b44cb0f","Type":"ContainerStarted","Data":"f47ad7d516ca0406f5f441e66db47314937098210b39ecff05f284f88b9c693c"}
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.481118 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.481916 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" event={"ID":"14aca058-5a50-4afe-b1ec-54428d29ae14","Type":"ContainerStarted","Data":"09b469de878fdb7a10311388591278edd8ad41a8200b1e7726455abf68e8d7e5"}
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.482045 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-p5lwg"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.482253 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.510731 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" podStartSLOduration=3.510599807 podStartE2EDuration="3.510599807s" podCreationTimestamp="2025-12-11 16:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:16.499215039 +0000 UTC m=+300.302745056" watchObservedRunningTime="2025-12-11 16:59:16.510599807 +0000 UTC m=+300.314129824"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.528854 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"]
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.531613 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-p5lwg"]
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.540198 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"]
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.549707 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-4nbzd"]
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.708832 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 11 16:59:16 crc kubenswrapper[5129]: I1211 16:59:16.709253 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 11 16:59:17 crc kubenswrapper[5129]: I1211 16:59:17.379920 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:17 crc kubenswrapper[5129]: I1211 16:59:17.488688 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" event={"ID":"14aca058-5a50-4afe-b1ec-54428d29ae14","Type":"ContainerStarted","Data":"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e"}
Dec 11 16:59:17 crc kubenswrapper[5129]: I1211 16:59:17.488937 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:17 crc kubenswrapper[5129]: I1211 16:59:17.493501 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"
Dec 11 16:59:17 crc kubenswrapper[5129]: I1211 16:59:17.504917 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" podStartSLOduration=4.504899702 podStartE2EDuration="4.504899702s" podCreationTimestamp="2025-12-11 16:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:17.503318815 +0000 UTC m=+301.306848852" watchObservedRunningTime="2025-12-11 16:59:17.504899702 +0000 UTC m=+301.308429719"
Dec 11 16:59:18 crc kubenswrapper[5129]: I1211 16:59:18.527327 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee1c49f-73d9-487f-ba2f-94ad553307e9" path="/var/lib/kubelet/pods/5ee1c49f-73d9-487f-ba2f-94ad553307e9/volumes"
Dec 11 16:59:18 crc kubenswrapper[5129]: I1211 16:59:18.528781 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c01b31-aadf-4115-9a72-cdb687f8a70a" path="/var/lib/kubelet/pods/a0c01b31-aadf-4115-9a72-cdb687f8a70a/volumes"
Dec 11 16:59:19 crc kubenswrapper[5129]: E1211 16:59:19.611772 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]"
Dec 11 16:59:29 crc kubenswrapper[5129]: E1211 16:59:29.757463 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]"
Dec 11 16:59:37 crc kubenswrapper[5129]: I1211 16:59:37.974751 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"]
Dec 11 16:59:37 crc kubenswrapper[5129]: I1211 16:59:37.975650 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" podUID="29a25d35-56b3-4721-a606-28ba3b44cb0f" containerName="controller-manager" containerID="cri-o://52f3371058d8c05db40acca6e124d5a045dc7d953e00353154c758588e1b6180" gracePeriod=30
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.609859 5129 generic.go:358] "Generic (PLEG): container finished" podID="29a25d35-56b3-4721-a606-28ba3b44cb0f" containerID="52f3371058d8c05db40acca6e124d5a045dc7d953e00353154c758588e1b6180" exitCode=0
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.609961 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" event={"ID":"29a25d35-56b3-4721-a606-28ba3b44cb0f","Type":"ContainerDied","Data":"52f3371058d8c05db40acca6e124d5a045dc7d953e00353154c758588e1b6180"}
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.716806 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz"
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.756357 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-plzl5"]
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.757634 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29a25d35-56b3-4721-a606-28ba3b44cb0f" containerName="controller-manager"
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.757671 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a25d35-56b3-4721-a606-28ba3b44cb0f" containerName="controller-manager"
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.757865 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="29a25d35-56b3-4721-a606-28ba3b44cb0f" containerName="controller-manager"
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.765206 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5"
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773138 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-468mc\" (UniqueName: \"kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773231 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773261 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773546 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773673 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.773771 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp\") pod \"29a25d35-56b3-4721-a606-28ba3b44cb0f\" (UID: \"29a25d35-56b3-4721-a606-28ba3b44cb0f\") "
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.774164 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.774275 5129 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.774264 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.774469 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp" (OuterVolumeSpecName: "tmp") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "tmp".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.774788 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config" (OuterVolumeSpecName: "config") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.779413 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc" (OuterVolumeSpecName: "kube-api-access-468mc") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "kube-api-access-468mc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.779923 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29a25d35-56b3-4721-a606-28ba3b44cb0f" (UID: "29a25d35-56b3-4721-a606-28ba3b44cb0f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.791161 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-plzl5"] Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875078 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38a25314-b14b-4016-9ecb-1e4c220af250-tmp\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875174 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnwbq\" (UniqueName: \"kubernetes.io/projected/38a25314-b14b-4016-9ecb-1e4c220af250-kube-api-access-dnwbq\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875213 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-client-ca\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875268 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-proxy-ca-bundles\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 
16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875359 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-config\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875479 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a25314-b14b-4016-9ecb-1e4c220af250-serving-cert\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875543 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875560 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a25d35-56b3-4721-a606-28ba3b44cb0f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875572 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a25d35-56b3-4721-a606-28ba3b44cb0f-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875584 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29a25d35-56b3-4721-a606-28ba3b44cb0f-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.875596 5129 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-468mc\" (UniqueName: \"kubernetes.io/projected/29a25d35-56b3-4721-a606-28ba3b44cb0f-kube-api-access-468mc\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.976972 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dnwbq\" (UniqueName: \"kubernetes.io/projected/38a25314-b14b-4016-9ecb-1e4c220af250-kube-api-access-dnwbq\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977019 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-client-ca\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977052 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-proxy-ca-bundles\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977099 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-config\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977126 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a25314-b14b-4016-9ecb-1e4c220af250-serving-cert\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977154 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38a25314-b14b-4016-9ecb-1e4c220af250-tmp\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.977875 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38a25314-b14b-4016-9ecb-1e4c220af250-tmp\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.978333 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-client-ca\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.978660 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-proxy-ca-bundles\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.979441 5129 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a25314-b14b-4016-9ecb-1e4c220af250-config\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:38 crc kubenswrapper[5129]: I1211 16:59:38.985354 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38a25314-b14b-4016-9ecb-1e4c220af250-serving-cert\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.007769 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnwbq\" (UniqueName: \"kubernetes.io/projected/38a25314-b14b-4016-9ecb-1e4c220af250-kube-api-access-dnwbq\") pod \"controller-manager-56686966c9-plzl5\" (UID: \"38a25314-b14b-4016-9ecb-1e4c220af250\") " pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.109080 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.506936 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56686966c9-plzl5"] Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.514467 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.617806 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" event={"ID":"29a25d35-56b3-4721-a606-28ba3b44cb0f","Type":"ContainerDied","Data":"f47ad7d516ca0406f5f441e66db47314937098210b39ecff05f284f88b9c693c"} Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.617871 5129 scope.go:117] "RemoveContainer" containerID="52f3371058d8c05db40acca6e124d5a045dc7d953e00353154c758588e1b6180" Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.617831 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785dfb659c-98fdz" Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.621127 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" event={"ID":"38a25314-b14b-4016-9ecb-1e4c220af250","Type":"ContainerStarted","Data":"93933c8f2450ff8e6158f77b4abd70ff135d7c94baffc26b19c82e7512ef1b2e"} Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.650845 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"] Dec 11 16:59:39 crc kubenswrapper[5129]: I1211 16:59:39.656853 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-785dfb659c-98fdz"] Dec 11 16:59:39 crc kubenswrapper[5129]: E1211 16:59:39.877833 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]" Dec 11 16:59:40 crc kubenswrapper[5129]: I1211 16:59:40.526833 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a25d35-56b3-4721-a606-28ba3b44cb0f" path="/var/lib/kubelet/pods/29a25d35-56b3-4721-a606-28ba3b44cb0f/volumes" Dec 11 16:59:40 crc kubenswrapper[5129]: I1211 16:59:40.628563 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" event={"ID":"38a25314-b14b-4016-9ecb-1e4c220af250","Type":"ContainerStarted","Data":"8be6689495f23a4c3b772221e4a426bd838c462655534ca28d9657ff2cd71c38"} Dec 11 16:59:40 crc kubenswrapper[5129]: I1211 16:59:40.628618 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:40 crc kubenswrapper[5129]: 
I1211 16:59:40.633974 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" Dec 11 16:59:40 crc kubenswrapper[5129]: I1211 16:59:40.648845 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56686966c9-plzl5" podStartSLOduration=3.648827806 podStartE2EDuration="3.648827806s" podCreationTimestamp="2025-12-11 16:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:40.647051403 +0000 UTC m=+324.450581460" watchObservedRunningTime="2025-12-11 16:59:40.648827806 +0000 UTC m=+324.452357813" Dec 11 16:59:50 crc kubenswrapper[5129]: E1211 16:59:50.021490 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]" Dec 11 16:59:57 crc kubenswrapper[5129]: I1211 16:59:57.961147 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"] Dec 11 16:59:57 crc kubenswrapper[5129]: I1211 16:59:57.962346 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" podUID="14aca058-5a50-4afe-b1ec-54428d29ae14" containerName="route-controller-manager" containerID="cri-o://25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e" gracePeriod=30 Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.400172 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.422158 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f"] Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.422678 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14aca058-5a50-4afe-b1ec-54428d29ae14" containerName="route-controller-manager" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.422697 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="14aca058-5a50-4afe-b1ec-54428d29ae14" containerName="route-controller-manager" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.422786 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="14aca058-5a50-4afe-b1ec-54428d29ae14" containerName="route-controller-manager" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.428185 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.439827 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f"] Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541255 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config\") pod \"14aca058-5a50-4afe-b1ec-54428d29ae14\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541444 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp\") pod \"14aca058-5a50-4afe-b1ec-54428d29ae14\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541547 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4dh\" (UniqueName: \"kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh\") pod \"14aca058-5a50-4afe-b1ec-54428d29ae14\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541575 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert\") pod \"14aca058-5a50-4afe-b1ec-54428d29ae14\" (UID: \"14aca058-5a50-4afe-b1ec-54428d29ae14\") " Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541603 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca\") pod \"14aca058-5a50-4afe-b1ec-54428d29ae14\" (UID: 
\"14aca058-5a50-4afe-b1ec-54428d29ae14\") " Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541744 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/557b7e91-2194-4ab8-abaa-e2d433abb9bf-tmp\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541797 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/557b7e91-2194-4ab8-abaa-e2d433abb9bf-serving-cert\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541832 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-config\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541898 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-client-ca\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541963 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h7nrn\" (UniqueName: \"kubernetes.io/projected/557b7e91-2194-4ab8-abaa-e2d433abb9bf-kube-api-access-h7nrn\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.541989 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp" (OuterVolumeSpecName: "tmp") pod "14aca058-5a50-4afe-b1ec-54428d29ae14" (UID: "14aca058-5a50-4afe-b1ec-54428d29ae14"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.542389 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config" (OuterVolumeSpecName: "config") pod "14aca058-5a50-4afe-b1ec-54428d29ae14" (UID: "14aca058-5a50-4afe-b1ec-54428d29ae14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.542401 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca" (OuterVolumeSpecName: "client-ca") pod "14aca058-5a50-4afe-b1ec-54428d29ae14" (UID: "14aca058-5a50-4afe-b1ec-54428d29ae14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.547112 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh" (OuterVolumeSpecName: "kube-api-access-kq4dh") pod "14aca058-5a50-4afe-b1ec-54428d29ae14" (UID: "14aca058-5a50-4afe-b1ec-54428d29ae14"). InnerVolumeSpecName "kube-api-access-kq4dh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.547214 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14aca058-5a50-4afe-b1ec-54428d29ae14" (UID: "14aca058-5a50-4afe-b1ec-54428d29ae14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.642874 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-client-ca\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.642985 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7nrn\" (UniqueName: \"kubernetes.io/projected/557b7e91-2194-4ab8-abaa-e2d433abb9bf-kube-api-access-h7nrn\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643205 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/557b7e91-2194-4ab8-abaa-e2d433abb9bf-tmp\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643313 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/557b7e91-2194-4ab8-abaa-e2d433abb9bf-serving-cert\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643371 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-config\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643469 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq4dh\" (UniqueName: \"kubernetes.io/projected/14aca058-5a50-4afe-b1ec-54428d29ae14-kube-api-access-kq4dh\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643486 5129 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14aca058-5a50-4afe-b1ec-54428d29ae14-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643498 5129 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643529 5129 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14aca058-5a50-4afe-b1ec-54428d29ae14-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643543 5129 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14aca058-5a50-4afe-b1ec-54428d29ae14-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:59:58 
crc kubenswrapper[5129]: I1211 16:59:58.643811 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-client-ca\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.643990 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/557b7e91-2194-4ab8-abaa-e2d433abb9bf-tmp\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.644728 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557b7e91-2194-4ab8-abaa-e2d433abb9bf-config\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.649559 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/557b7e91-2194-4ab8-abaa-e2d433abb9bf-serving-cert\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: \"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.661407 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7nrn\" (UniqueName: \"kubernetes.io/projected/557b7e91-2194-4ab8-abaa-e2d433abb9bf-kube-api-access-h7nrn\") pod \"route-controller-manager-588cf57d4c-dxs2f\" (UID: 
\"557b7e91-2194-4ab8-abaa-e2d433abb9bf\") " pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.745151 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.760193 5129 generic.go:358] "Generic (PLEG): container finished" podID="14aca058-5a50-4afe-b1ec-54428d29ae14" containerID="25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e" exitCode=0 Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.760288 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" event={"ID":"14aca058-5a50-4afe-b1ec-54428d29ae14","Type":"ContainerDied","Data":"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e"} Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.760338 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.760351 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx" event={"ID":"14aca058-5a50-4afe-b1ec-54428d29ae14","Type":"ContainerDied","Data":"09b469de878fdb7a10311388591278edd8ad41a8200b1e7726455abf68e8d7e5"} Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.760371 5129 scope.go:117] "RemoveContainer" containerID="25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.783960 5129 scope.go:117] "RemoveContainer" containerID="25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e" Dec 11 16:59:58 crc kubenswrapper[5129]: E1211 16:59:58.784415 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e\": container with ID starting with 25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e not found: ID does not exist" containerID="25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.784453 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e"} err="failed to get container status \"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e\": rpc error: code = NotFound desc = could not find container \"25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e\": container with ID starting with 25d90caf4f2e4a541199dc0902af9c46495d714f31dc01fcb6e3315aad59a77e not found: ID does not exist" Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.800255 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"] Dec 11 16:59:58 crc kubenswrapper[5129]: I1211 16:59:58.803855 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-858f6c755f-d7ssx"] Dec 11 16:59:59 crc kubenswrapper[5129]: I1211 16:59:59.197285 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f"] Dec 11 16:59:59 crc kubenswrapper[5129]: W1211 16:59:59.212716 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557b7e91_2194_4ab8_abaa_e2d433abb9bf.slice/crio-6144be1c799f97287d454a4a2fc3bea83a36f5c3d38bb73166891f6f7b02153c WatchSource:0}: Error finding container 6144be1c799f97287d454a4a2fc3bea83a36f5c3d38bb73166891f6f7b02153c: Status 404 returned error can't find the container with id 6144be1c799f97287d454a4a2fc3bea83a36f5c3d38bb73166891f6f7b02153c Dec 11 16:59:59 crc kubenswrapper[5129]: I1211 16:59:59.767765 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" event={"ID":"557b7e91-2194-4ab8-abaa-e2d433abb9bf","Type":"ContainerStarted","Data":"a3fb6d6556b69818ef735a28b77f9740a5ef81645be09f3548b5ef45ba351df2"} Dec 11 16:59:59 crc kubenswrapper[5129]: I1211 16:59:59.767841 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" event={"ID":"557b7e91-2194-4ab8-abaa-e2d433abb9bf","Type":"ContainerStarted","Data":"6144be1c799f97287d454a4a2fc3bea83a36f5c3d38bb73166891f6f7b02153c"} Dec 11 16:59:59 crc kubenswrapper[5129]: I1211 16:59:59.768096 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 16:59:59 crc kubenswrapper[5129]: 
I1211 16:59:59.802539 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" podStartSLOduration=2.802491247 podStartE2EDuration="2.802491247s" podCreationTimestamp="2025-12-11 16:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:59:59.798478131 +0000 UTC m=+343.602008198" watchObservedRunningTime="2025-12-11 16:59:59.802491247 +0000 UTC m=+343.606021304" Dec 11 16:59:59 crc kubenswrapper[5129]: I1211 16:59:59.951977 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588cf57d4c-dxs2f" Dec 11 17:00:00 crc kubenswrapper[5129]: E1211 17:00:00.154996 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.180185 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g"] Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.183844 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g"] Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.183865 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.185857 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.185958 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.374287 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2bn\" (UniqueName: \"kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.374653 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.374805 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.476133 5129 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.476452 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.476698 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2bn\" (UniqueName: \"kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.477849 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.492805 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc 
kubenswrapper[5129]: I1211 17:00:00.505704 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2bn\" (UniqueName: \"kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn\") pod \"collect-profiles-29424540-5952g\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.528573 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.533300 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14aca058-5a50-4afe-b1ec-54428d29ae14" path="/var/lib/kubelet/pods/14aca058-5a50-4afe-b1ec-54428d29ae14/volumes" Dec 11 17:00:00 crc kubenswrapper[5129]: I1211 17:00:00.941691 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g"] Dec 11 17:00:00 crc kubenswrapper[5129]: W1211 17:00:00.945730 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb06ff79_204a_4686_82a4_c8d7db259a54.slice/crio-187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968 WatchSource:0}: Error finding container 187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968: Status 404 returned error can't find the container with id 187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968 Dec 11 17:00:01 crc kubenswrapper[5129]: I1211 17:00:01.780935 5129 generic.go:358] "Generic (PLEG): container finished" podID="cb06ff79-204a-4686-82a4-c8d7db259a54" containerID="bfb69c63f04da3c62eb152aca43feebfda58be71a4926e0befbcaef35428c0f8" exitCode=0 Dec 11 17:00:01 crc kubenswrapper[5129]: I1211 17:00:01.781069 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" event={"ID":"cb06ff79-204a-4686-82a4-c8d7db259a54","Type":"ContainerDied","Data":"bfb69c63f04da3c62eb152aca43feebfda58be71a4926e0befbcaef35428c0f8"} Dec 11 17:00:01 crc kubenswrapper[5129]: I1211 17:00:01.781131 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" event={"ID":"cb06ff79-204a-4686-82a4-c8d7db259a54","Type":"ContainerStarted","Data":"187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968"} Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.206961 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.213963 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume\") pod \"cb06ff79-204a-4686-82a4-c8d7db259a54\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.214011 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume\") pod \"cb06ff79-204a-4686-82a4-c8d7db259a54\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.214065 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2bn\" (UniqueName: \"kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn\") pod \"cb06ff79-204a-4686-82a4-c8d7db259a54\" (UID: \"cb06ff79-204a-4686-82a4-c8d7db259a54\") " Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.214653 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb06ff79-204a-4686-82a4-c8d7db259a54" (UID: "cb06ff79-204a-4686-82a4-c8d7db259a54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.222265 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb06ff79-204a-4686-82a4-c8d7db259a54" (UID: "cb06ff79-204a-4686-82a4-c8d7db259a54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.222265 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn" (OuterVolumeSpecName: "kube-api-access-vm2bn") pod "cb06ff79-204a-4686-82a4-c8d7db259a54" (UID: "cb06ff79-204a-4686-82a4-c8d7db259a54"). InnerVolumeSpecName "kube-api-access-vm2bn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.314930 5129 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb06ff79-204a-4686-82a4-c8d7db259a54-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.314972 5129 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb06ff79-204a-4686-82a4-c8d7db259a54-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.314988 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vm2bn\" (UniqueName: \"kubernetes.io/projected/cb06ff79-204a-4686-82a4-c8d7db259a54-kube-api-access-vm2bn\") on node \"crc\" DevicePath \"\"" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.797711 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" event={"ID":"cb06ff79-204a-4686-82a4-c8d7db259a54","Type":"ContainerDied","Data":"187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968"} Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.797758 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="187c6fc16e5912d84e5d4d9fb35d9d0a812767a07d7d2f0e3c816329f0fcf968" Dec 11 17:00:03 crc kubenswrapper[5129]: I1211 17:00:03.797767 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424540-5952g" Dec 11 17:00:10 crc kubenswrapper[5129]: E1211 17:00:10.307581 5129 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1c49f_73d9_487f_ba2f_94ad553307e9.slice\": RecentStats: unable to find data in memory cache]" Dec 11 17:01:38 crc kubenswrapper[5129]: I1211 17:01:38.947574 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:01:38 crc kubenswrapper[5129]: I1211 17:01:38.948317 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:02:08 crc kubenswrapper[5129]: I1211 17:02:08.946797 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:02:08 crc kubenswrapper[5129]: I1211 17:02:08.947503 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:02:38 crc kubenswrapper[5129]: I1211 17:02:38.947090 5129 
patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:02:38 crc kubenswrapper[5129]: I1211 17:02:38.947731 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:02:38 crc kubenswrapper[5129]: I1211 17:02:38.947777 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 17:02:38 crc kubenswrapper[5129]: I1211 17:02:38.948370 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 17:02:38 crc kubenswrapper[5129]: I1211 17:02:38.948438 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7" gracePeriod=600 Dec 11 17:02:39 crc kubenswrapper[5129]: I1211 17:02:39.953285 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7" exitCode=0 Dec 11 17:02:39 crc 
kubenswrapper[5129]: I1211 17:02:39.953410 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7"} Dec 11 17:02:39 crc kubenswrapper[5129]: I1211 17:02:39.954032 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b"} Dec 11 17:02:39 crc kubenswrapper[5129]: I1211 17:02:39.954059 5129 scope.go:117] "RemoveContainer" containerID="eed0d8912372b478231534e18058ad24e8107a1a4294de3b20010b63410430cf" Dec 11 17:03:58 crc kubenswrapper[5129]: I1211 17:03:58.887955 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"] Dec 11 17:03:58 crc kubenswrapper[5129]: I1211 17:03:58.889035 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="kube-rbac-proxy" containerID="cri-o://ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8" gracePeriod=30 Dec 11 17:03:58 crc kubenswrapper[5129]: I1211 17:03:58.889187 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="ovnkube-cluster-manager" containerID="cri-o://925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.082352 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097281 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2khpc"] Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097892 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-controller" containerID="cri-o://e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097998 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-acl-logging" containerID="cri-o://075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097903 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="sbdb" containerID="cri-o://df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097931 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="nbdb" containerID="cri-o://86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097949 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097914 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="northd" containerID="cri-o://47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.097965 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-node" containerID="cri-o://6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.119396 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"] Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.119941 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="kube-rbac-proxy" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.119964 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="kube-rbac-proxy" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.119977 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb06ff79-204a-4686-82a4-c8d7db259a54" containerName="collect-profiles" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.119983 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb06ff79-204a-4686-82a4-c8d7db259a54" containerName="collect-profiles" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.120006 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="ovnkube-cluster-manager" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.120013 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="ovnkube-cluster-manager" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.120100 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="kube-rbac-proxy" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.120116 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerName="ovnkube-cluster-manager" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.120124 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb06ff79-204a-4686-82a4-c8d7db259a54" containerName="collect-profiles" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.163470 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.172951 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovnkube-controller" containerID="cri-o://3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" gracePeriod=30 Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.212190 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config\") pod \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.212233 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnwmv\" (UniqueName: \"kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv\") pod \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.212264 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides\") pod \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.212305 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert\") pod \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\" (UID: \"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e\") " Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.213221 5129 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" (UID: "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.213424 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" (UID: "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.219918 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv" (OuterVolumeSpecName: "kube-api-access-pnwmv") pod "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" (UID: "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e"). InnerVolumeSpecName "kube-api-access-pnwmv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.220750 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" (UID: "2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313557 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313639 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313744 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5skx2\" (UniqueName: \"kubernetes.io/projected/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-kube-api-access-5skx2\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313848 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313918 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313946 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pnwmv\" (UniqueName: \"kubernetes.io/projected/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-kube-api-access-pnwmv\") on node \"crc\" DevicePath \"\""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313957 5129 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.313966 5129 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.415082 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.415151 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.415198 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.415219 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5skx2\" (UniqueName: \"kubernetes.io/projected/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-kube-api-access-5skx2\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.415771 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.417333 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.419730 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.432465 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5skx2\" (UniqueName: \"kubernetes.io/projected/d36bdc6a-eabc-4f0e-88d8-49fb94de2f00-kube-api-access-5skx2\") pod \"ovnkube-control-plane-97c9b6c48-qhn66\" (UID: \"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.463750 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2khpc_8bfafb25-f61d-4c63-8e1e-9cba0778559a/ovn-acl-logging/0.log"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.464332 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2khpc_8bfafb25-f61d-4c63-8e1e-9cba0778559a/ovn-controller/0.log"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.464834 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.507467 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5x5b9"]
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508061 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-node"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508084 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-node"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508095 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508101 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508110 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="northd"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508116 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="northd"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508127 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="nbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508132 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="nbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508139 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508145 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508159 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovnkube-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508164 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovnkube-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508174 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-acl-logging"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508180 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-acl-logging"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508187 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="sbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508192 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="sbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508200 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kubecfg-setup"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508206 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kubecfg-setup"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508291 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="northd"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508301 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="nbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508307 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="sbdb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508314 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovnkube-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508322 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-controller"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508328 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-node"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508336 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="ovn-acl-logging"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.508341 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerName="kube-rbac-proxy-ovn-metrics"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.512479 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.546862 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2khpc_8bfafb25-f61d-4c63-8e1e-9cba0778559a/ovn-acl-logging/0.log"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.547711 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2khpc_8bfafb25-f61d-4c63-8e1e-9cba0778559a/ovn-controller/0.log"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.547999 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548020 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548027 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548034 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548040 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548031 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548080 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548093 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548104 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548116 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548125 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548046 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548150 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" exitCode=143
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548157 5129 generic.go:358] "Generic (PLEG): container finished" podID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" exitCode=143
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548127 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548229 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548239 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548244 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548252 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548262 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548269 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548139 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548275 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548373 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548383 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548388 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548394 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548400 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548405 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548424 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548442 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548447 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548452 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548457 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548462 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548467 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548471 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548476 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548480 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548487 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2khpc" event={"ID":"8bfafb25-f61d-4c63-8e1e-9cba0778559a","Type":"ContainerDied","Data":"e601bcfe915d79538a0809522d0aec5188d507aaf71bc852ea26b15c1d7f9559"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548535 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548542 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548547 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548552 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548557 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548562 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548567 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548572 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.548577 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550255 5129 generic.go:358] "Generic (PLEG): container finished" podID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerID="925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550273 5129 generic.go:358] "Generic (PLEG): container finished" podID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" containerID="ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8" exitCode=0
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550292 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerDied","Data":"925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550328 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550340 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550355 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerDied","Data":"ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550365 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550373 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550384 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc" event={"ID":"2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e","Type":"ContainerDied","Data":"3e091fc2619d0e7d7e4020b59e13b62971c3ad6b881a82129fa3e56e98095cee"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550393 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550400 5129 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.550512 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.552203 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.552240 5129 generic.go:358] "Generic (PLEG): container finished" podID="5313889a-2681-4f68-96f8-d5dfea8d3a8b" containerID="9828ed0e44bb4b999d124985cebaf15596efde2fe8148192b73b4f18b49fb8ff" exitCode=2
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.552277 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m95zr" event={"ID":"5313889a-2681-4f68-96f8-d5dfea8d3a8b","Type":"ContainerDied","Data":"9828ed0e44bb4b999d124985cebaf15596efde2fe8148192b73b4f18b49fb8ff"}
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.552778 5129 scope.go:117] "RemoveContainer" containerID="9828ed0e44bb4b999d124985cebaf15596efde2fe8148192b73b4f18b49fb8ff"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.571262 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.584549 5129 scope.go:117] "RemoveContainer" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.586595 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"]
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.590337 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-h4rqc"]
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.605683 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616421 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616458 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616490 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616504 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616545 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616567 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616560 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616646 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616667 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616681 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616887 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616917 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jpwl\" (UniqueName: \"kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616935 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616956 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.616988 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617014 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617043 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617066 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617097 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617110 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") "
Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617107 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "ovnkube-script-lib".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617135 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\" (UID: \"8bfafb25-f61d-4c63-8e1e-9cba0778559a\") " Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617260 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-log-socket\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617323 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-env-overrides\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617341 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfpl\" (UniqueName: \"kubernetes.io/projected/3038dce3-6c59-43bd-90ba-cef8432fba2c-kube-api-access-6qfpl\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617420 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovn-node-metrics-cert\") pod \"ovnkube-node-5x5b9\" (UID: 
\"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617440 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617447 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617475 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617500 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617535 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-netns\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617566 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log" (OuterVolumeSpecName: "node-log") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617571 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-config\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617591 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617617 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617627 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-node-log\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617637 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617656 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617941 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617959 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617981 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket" (OuterVolumeSpecName: "log-socket") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.617987 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618017 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618224 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash" (OuterVolumeSpecName: "host-slash") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618285 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618319 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618370 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-systemd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618424 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-bin\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618485 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-var-lib-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618502 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-script-lib\") pod \"ovnkube-node-5x5b9\" 
(UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618548 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-ovn\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618581 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-systemd-units\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618600 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-kubelet\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618622 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-netd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618651 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-openvswitch\") pod \"ovnkube-node-5x5b9\" 
(UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618677 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-etc-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618815 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-slash\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618956 5129 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618973 5129 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618983 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.618992 5129 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc 
kubenswrapper[5129]: I1211 17:03:59.619001 5129 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619010 5129 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619018 5129 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619027 5129 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619039 5129 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619051 5129 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619060 5129 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619069 5129 reconciler_common.go:299] 
"Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619077 5129 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bfafb25-f61d-4c63-8e1e-9cba0778559a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619085 5129 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619102 5129 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619111 5129 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.619119 5129 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.620593 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.624127 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl" (OuterVolumeSpecName: "kube-api-access-2jpwl") pod 
"8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "kube-api-access-2jpwl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.626163 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.631510 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8bfafb25-f61d-4c63-8e1e-9cba0778559a" (UID: "8bfafb25-f61d-4c63-8e1e-9cba0778559a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.640514 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.653050 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.666188 5129 scope.go:117] "RemoveContainer" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.681706 5129 scope.go:117] "RemoveContainer" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.699476 5129 scope.go:117] "RemoveContainer" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.720827 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.720897 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-systemd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.720918 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-bin\") pod 
\"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.720985 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-var-lib-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721020 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-script-lib\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721056 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-ovn\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721078 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-systemd-units\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721093 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-kubelet\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721107 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-netd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721124 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721142 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-etc-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721158 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-slash\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721178 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-log-socket\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: 
I1211 17:03:59.721205 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-env-overrides\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721220 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfpl\" (UniqueName: \"kubernetes.io/projected/3038dce3-6c59-43bd-90ba-cef8432fba2c-kube-api-access-6qfpl\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721254 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovn-node-metrics-cert\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721268 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721292 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-netns\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: 
I1211 17:03:59.721310 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-config\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721338 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-node-log\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721376 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jpwl\" (UniqueName: \"kubernetes.io/projected/8bfafb25-f61d-4c63-8e1e-9cba0778559a-kube-api-access-2jpwl\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721386 5129 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bfafb25-f61d-4c63-8e1e-9cba0778559a-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721395 5129 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bfafb25-f61d-4c63-8e1e-9cba0778559a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721435 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-node-log\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721468 5129 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721491 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-systemd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721528 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-bin\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721550 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-var-lib-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721672 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-slash\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721740 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-cni-netd\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721746 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-kubelet\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721788 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721794 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-systemd-units\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721826 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-log-socket\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721872 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-run-netns\") pod \"ovnkube-node-5x5b9\" (UID: 
\"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721838 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721836 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-etc-openvswitch\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.721783 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3038dce3-6c59-43bd-90ba-cef8432fba2c-run-ovn\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.722276 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-script-lib\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.722471 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-env-overrides\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.722580 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.722789 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovnkube-config\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.723234 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.723353 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} err="failed to get container status \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.723408 5129 scope.go:117] "RemoveContainer" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.724077 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.724123 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"} err="failed to get container status \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": rpc error: code = NotFound desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.724158 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.724748 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.724807 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} err="failed to get container status \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": rpc error: code = NotFound desc = could not find container 
\"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.724843 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.725268 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.725312 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} err="failed to get container status \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": rpc error: code = NotFound desc = could not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.725336 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.725806 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not exist" 
containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.725849 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} err="failed to get container status \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.725873 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.728580 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3038dce3-6c59-43bd-90ba-cef8432fba2c-ovn-node-metrics-cert\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.729271 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.729322 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} err="failed to get container status 
\"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.729351 5129 scope.go:117] "RemoveContainer" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.729723 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": container with ID starting with 075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847 not found: ID does not exist" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.729787 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"} err="failed to get container status \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": rpc error: code = NotFound desc = could not find container \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": container with ID starting with 075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.729817 5129 scope.go:117] "RemoveContainer" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.730159 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": container with ID starting with e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d not found: ID does not exist" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730213 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"} err="failed to get container status \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": rpc error: code = NotFound desc = could not find container \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": container with ID starting with e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730231 5129 scope.go:117] "RemoveContainer" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: E1211 17:03:59.730491 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": container with ID starting with 9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750 not found: ID does not exist" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730548 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"} err="failed to get container status \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": rpc error: code = NotFound desc = could not find container \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": container with ID 
starting with 9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730567 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730876 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} err="failed to get container status \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.730901 5129 scope.go:117] "RemoveContainer" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731230 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"} err="failed to get container status \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": rpc error: code = NotFound desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731253 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731476 5129 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} err="failed to get container status \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": rpc error: code = NotFound desc = could not find container \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731493 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731766 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} err="failed to get container status \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": rpc error: code = NotFound desc = could not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731784 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731949 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} err="failed to get container status \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not 
exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.731968 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732160 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} err="failed to get container status \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732178 5129 scope.go:117] "RemoveContainer" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732545 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"} err="failed to get container status \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": rpc error: code = NotFound desc = could not find container \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": container with ID starting with 075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732564 5129 scope.go:117] "RemoveContainer" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732750 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"} err="failed to get container status 
\"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": rpc error: code = NotFound desc = could not find container \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": container with ID starting with e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732770 5129 scope.go:117] "RemoveContainer" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732932 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"} err="failed to get container status \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": rpc error: code = NotFound desc = could not find container \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": container with ID starting with 9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.732949 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733151 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} err="failed to get container status \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733167 5129 scope.go:117] "RemoveContainer" 
containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733334 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"} err="failed to get container status \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": rpc error: code = NotFound desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733359 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733664 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} err="failed to get container status \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": rpc error: code = NotFound desc = could not find container \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733703 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733949 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} err="failed to get container status \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": rpc error: code = NotFound desc = could 
not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.733966 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734138 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} err="failed to get container status \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734154 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734380 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} err="failed to get container status \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734398 5129 scope.go:117] "RemoveContainer" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 
17:03:59.734642 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"} err="failed to get container status \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": rpc error: code = NotFound desc = could not find container \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": container with ID starting with 075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734683 5129 scope.go:117] "RemoveContainer" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734958 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"} err="failed to get container status \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": rpc error: code = NotFound desc = could not find container \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": container with ID starting with e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.734976 5129 scope.go:117] "RemoveContainer" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.735291 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"} err="failed to get container status \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": rpc error: code = NotFound desc = could not find container \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": container with ID starting with 
9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.735312 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.735755 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} err="failed to get container status \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.735774 5129 scope.go:117] "RemoveContainer" containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736064 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"} err="failed to get container status \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": rpc error: code = NotFound desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736081 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736303 5129 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} err="failed to get container status \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": rpc error: code = NotFound desc = could not find container \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736321 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736562 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} err="failed to get container status \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": rpc error: code = NotFound desc = could not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736584 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.736983 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} err="failed to get container status \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not 
exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.737006 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.737921 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} err="failed to get container status \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.737939 5129 scope.go:117] "RemoveContainer" containerID="075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738298 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847"} err="failed to get container status \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": rpc error: code = NotFound desc = could not find container \"075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847\": container with ID starting with 075505a61412815968b74bd2f02bcb0b0d4edba212cf61ac7c2f9da1bca97847 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738316 5129 scope.go:117] "RemoveContainer" containerID="e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738507 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d"} err="failed to get container status 
\"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": rpc error: code = NotFound desc = could not find container \"e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d\": container with ID starting with e9ec61c5b300b63abef695c7cc65a19f54a4a0ab6664d70f999c66c64048471d not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738559 5129 scope.go:117] "RemoveContainer" containerID="9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738947 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750"} err="failed to get container status \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": rpc error: code = NotFound desc = could not find container \"9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750\": container with ID starting with 9236d514a009fccfcd45b60a5528bf80768fe83ff5cdb8c5d1259e6f49518750 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.738969 5129 scope.go:117] "RemoveContainer" containerID="3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.739164 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695"} err="failed to get container status \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": rpc error: code = NotFound desc = could not find container \"3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695\": container with ID starting with 3829edcc5810de7970bc7da7ebda56d21b60b417fe38e815bc99512a896dc695 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.739180 5129 scope.go:117] "RemoveContainer" 
containerID="df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.739518 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb"} err="failed to get container status \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": rpc error: code = NotFound desc = could not find container \"df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb\": container with ID starting with df1d153c7c06e15a4c02be97c83a995b7e5e1759ec09bea982829200ee047fcb not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.739552 5129 scope.go:117] "RemoveContainer" containerID="86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740138 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3"} err="failed to get container status \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": rpc error: code = NotFound desc = could not find container \"86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3\": container with ID starting with 86d30919629c39cb0b4dc74c515db4c720db9ed311e24e058a4de77642aa6ae3 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740167 5129 scope.go:117] "RemoveContainer" containerID="47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740425 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e"} err="failed to get container status \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": rpc error: code = NotFound desc = could 
not find container \"47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e\": container with ID starting with 47e010adc9268ee4d981f3a884321a00cbe70df0887d9ea8b8f6130b9fce868e not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740443 5129 scope.go:117] "RemoveContainer" containerID="7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740901 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490"} err="failed to get container status \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": rpc error: code = NotFound desc = could not find container \"7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490\": container with ID starting with 7841cb5ef79e5ecb2433dbee4f043dfd7f919c9d59aad48e0a0d2374d5954490 not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.740920 5129 scope.go:117] "RemoveContainer" containerID="6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.741369 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f"} err="failed to get container status \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": rpc error: code = NotFound desc = could not find container \"6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f\": container with ID starting with 6fccf09d18d90b1acd76130cc7859395d4ae7027a1af53b125d4caa03e3a3b7f not found: ID does not exist" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.749252 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfpl\" (UniqueName: 
\"kubernetes.io/projected/3038dce3-6c59-43bd-90ba-cef8432fba2c-kube-api-access-6qfpl\") pod \"ovnkube-node-5x5b9\" (UID: \"3038dce3-6c59-43bd-90ba-cef8432fba2c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.824109 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:03:59 crc kubenswrapper[5129]: W1211 17:03:59.851319 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3038dce3_6c59_43bd_90ba_cef8432fba2c.slice/crio-799212867e3c155a5c28a10bbc693ab725ebee57904dbfe531ecaa9ab4ee66ac WatchSource:0}: Error finding container 799212867e3c155a5c28a10bbc693ab725ebee57904dbfe531ecaa9ab4ee66ac: Status 404 returned error can't find the container with id 799212867e3c155a5c28a10bbc693ab725ebee57904dbfe531ecaa9ab4ee66ac Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.897660 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2khpc"] Dec 11 17:03:59 crc kubenswrapper[5129]: I1211 17:03:59.902830 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2khpc"] Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.535633 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e" path="/var/lib/kubelet/pods/2c60ead5-8f9c-4cc8-9a60-27d7967e1f2e/volumes" Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.537505 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bfafb25-f61d-4c63-8e1e-9cba0778559a" path="/var/lib/kubelet/pods/8bfafb25-f61d-4c63-8e1e-9cba0778559a/volumes" Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.562146 5129 generic.go:358] "Generic (PLEG): container finished" podID="3038dce3-6c59-43bd-90ba-cef8432fba2c" 
containerID="67e25bb608c0b9d47b055c809dbe4dacfb97ccae18bf45f6f7bef85ffb21462d" exitCode=0 Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.562207 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerDied","Data":"67e25bb608c0b9d47b055c809dbe4dacfb97ccae18bf45f6f7bef85ffb21462d"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.562275 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"799212867e3c155a5c28a10bbc693ab725ebee57904dbfe531ecaa9ab4ee66ac"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.565033 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.565171 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m95zr" event={"ID":"5313889a-2681-4f68-96f8-d5dfea8d3a8b","Type":"ContainerStarted","Data":"91b7c7fc6185c73c6e2a62656d7628fc0166e628da7de8ffb63f849d89b53ce6"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.570730 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66" event={"ID":"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00","Type":"ContainerStarted","Data":"1cf91f9b760e435b26d224ef56e357237ba084e4ff84aa63a979e78a0c3bf4e9"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.570815 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66" event={"ID":"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00","Type":"ContainerStarted","Data":"d5f33acc9a778521634e4f0a533b177fa4d4ad49697caa2461c66feba04a99d9"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.570842 5129 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66" event={"ID":"d36bdc6a-eabc-4f0e-88d8-49fb94de2f00","Type":"ContainerStarted","Data":"1c903d4ef467475b030acfe5d61147395b7822396b9b090a16d1754874ff037f"} Dec 11 17:04:00 crc kubenswrapper[5129]: I1211 17:04:00.662281 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qhn66" podStartSLOduration=2.662264472 podStartE2EDuration="2.662264472s" podCreationTimestamp="2025-12-11 17:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 17:04:00.660483001 +0000 UTC m=+584.464013068" watchObservedRunningTime="2025-12-11 17:04:00.662264472 +0000 UTC m=+584.465794489" Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580696 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"7931bb632c62a83ee7c3b89a0bff0b7b81126770f7e639a379a0d4c8bc759e52"} Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580744 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"6bd810d670fa4c1d669af74f96f5262793b8e667ee146302323f7083221214ea"} Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580757 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"ad82e647f578583766bf3f737e725bdd23606efd7f2e10650c84d9aae2f898be"} Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580771 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" 
event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"3f7a05d316fd9c551236cd4189443d27289f35ab2005715a8d1915f1700d6429"} Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580780 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"3c8c1b417c69aee5ea7d0a156d4f8395c78f50139b505fb9bef3aaab82a15199"} Dec 11 17:04:01 crc kubenswrapper[5129]: I1211 17:04:01.580791 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"a945c14ac9f68a52cb6b18575bd1f88d86add1e3f978f16f4cd3d463b3084cf7"} Dec 11 17:04:04 crc kubenswrapper[5129]: I1211 17:04:04.606820 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"12756597f650d8093e10cfcc65e1934a5e9be0257f4a1120ba1cdd8b4c0aad81"} Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.628003 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" event={"ID":"3038dce3-6c59-43bd-90ba-cef8432fba2c","Type":"ContainerStarted","Data":"42ea29f2bd94f9a7d465cf59ca53b2a96882a4740f2ae79e23ccaed0a96c0477"} Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.628625 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.628658 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.628675 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.664634 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.675647 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:04:06 crc kubenswrapper[5129]: I1211 17:04:06.722111 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" podStartSLOduration=7.72209576 podStartE2EDuration="7.72209576s" podCreationTimestamp="2025-12-11 17:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 17:04:06.683283855 +0000 UTC m=+590.486813932" watchObservedRunningTime="2025-12-11 17:04:06.72209576 +0000 UTC m=+590.525625777" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.766476 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.779216 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.782978 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.788156 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.803073 5129 
scope.go:117] "RemoveContainer" containerID="925f1650dfafc30a24660cda91c3ba71a86b4b836cd1a4adcdcecfc72be7f3c6" Dec 11 17:04:16 crc kubenswrapper[5129]: I1211 17:04:16.821553 5129 scope.go:117] "RemoveContainer" containerID="ac203eb7d3a66df9d77a66f824862b7d420d5f379e0af655cf41c71c2d58c7f8" Dec 11 17:04:26 crc kubenswrapper[5129]: I1211 17:04:26.809507 5129 ???:1] "http: TLS handshake error from 192.168.126.11:51304: no serving certificate available for the kubelet" Dec 11 17:04:38 crc kubenswrapper[5129]: I1211 17:04:38.661692 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5x5b9" Dec 11 17:05:08 crc kubenswrapper[5129]: I1211 17:05:08.946815 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:05:08 crc kubenswrapper[5129]: I1211 17:05:08.947792 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:05:16 crc kubenswrapper[5129]: I1211 17:05:16.859266 5129 scope.go:117] "RemoveContainer" containerID="ba87c4a8a0a5e814818b404929eb9db675d21cdbb1c3d1d1067f56414f2e57fb" Dec 11 17:05:16 crc kubenswrapper[5129]: I1211 17:05:16.883074 5129 scope.go:117] "RemoveContainer" containerID="28a07cfd9edc06e5929e2397f699d23b1d456f5ba707a869786ed4d695b18203" Dec 11 17:05:16 crc kubenswrapper[5129]: I1211 17:05:16.944085 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 17:05:16 crc kubenswrapper[5129]: I1211 17:05:16.944433 
5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5hlcw" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="registry-server" containerID="cri-o://0ff8eeb6a2322cd3466dd21372ffd8b50b455159d3aa8b15349823ce1f0b5298" gracePeriod=30 Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.234637 5129 generic.go:358] "Generic (PLEG): container finished" podID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerID="0ff8eeb6a2322cd3466dd21372ffd8b50b455159d3aa8b15349823ce1f0b5298" exitCode=0 Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.234755 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerDied","Data":"0ff8eeb6a2322cd3466dd21372ffd8b50b455159d3aa8b15349823ce1f0b5298"} Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.323005 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.490658 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities\") pod \"3b8f5017-2bba-4282-afb7-a8728ec2a378\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.490743 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbc7r\" (UniqueName: \"kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r\") pod \"3b8f5017-2bba-4282-afb7-a8728ec2a378\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.490839 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content\") pod \"3b8f5017-2bba-4282-afb7-a8728ec2a378\" (UID: \"3b8f5017-2bba-4282-afb7-a8728ec2a378\") " Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.492498 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities" (OuterVolumeSpecName: "utilities") pod "3b8f5017-2bba-4282-afb7-a8728ec2a378" (UID: "3b8f5017-2bba-4282-afb7-a8728ec2a378"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.497764 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r" (OuterVolumeSpecName: "kube-api-access-jbc7r") pod "3b8f5017-2bba-4282-afb7-a8728ec2a378" (UID: "3b8f5017-2bba-4282-afb7-a8728ec2a378"). InnerVolumeSpecName "kube-api-access-jbc7r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.503196 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b8f5017-2bba-4282-afb7-a8728ec2a378" (UID: "3b8f5017-2bba-4282-afb7-a8728ec2a378"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.592800 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbc7r\" (UniqueName: \"kubernetes.io/projected/3b8f5017-2bba-4282-afb7-a8728ec2a378-kube-api-access-jbc7r\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.592842 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:17 crc kubenswrapper[5129]: I1211 17:05:17.592856 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8f5017-2bba-4282-afb7-a8728ec2a378-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.028325 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rxwmh"] Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029242 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="extract-utilities" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029264 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="extract-utilities" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029330 5129 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="registry-server" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029342 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="registry-server" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029363 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="extract-content" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029374 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="extract-content" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.029590 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" containerName="registry-server" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.117603 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rxwmh"] Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.117763 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200453 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200539 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c820eb3-363d-4598-a4d8-ed07a1555ff8-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200570 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c820eb3-363d-4598-a4d8-ed07a1555ff8-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200599 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200618 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-kbvpk\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-kube-api-access-kbvpk\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200715 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-trusted-ca\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200741 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-tls\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.200767 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-certificates\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.223208 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.243349 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hlcw" event={"ID":"3b8f5017-2bba-4282-afb7-a8728ec2a378","Type":"ContainerDied","Data":"07d6b227155851c3f886391ed32ea13c4e22f3b8eb96955989a4cce7b007484b"} Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.243401 5129 scope.go:117] "RemoveContainer" containerID="0ff8eeb6a2322cd3466dd21372ffd8b50b455159d3aa8b15349823ce1f0b5298" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.243433 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hlcw" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.260630 5129 scope.go:117] "RemoveContainer" containerID="89658c39c18c3e74dde3abb21d5fd8fde8e5f048b4f4cf1bba895d3455b3f4ae" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.276553 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.284403 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hlcw"] Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.298528 5129 scope.go:117] "RemoveContainer" containerID="0f6c81b6c58e37d59a87f62527a545db9127331bc952a243dc3f369eb1c63abd" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302015 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-certificates\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302104 5129 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c820eb3-363d-4598-a4d8-ed07a1555ff8-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302144 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c820eb3-363d-4598-a4d8-ed07a1555ff8-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302182 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302209 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvpk\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-kube-api-access-kbvpk\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302245 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-trusted-ca\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 
17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.302275 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-tls\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.303763 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c820eb3-363d-4598-a4d8-ed07a1555ff8-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.305058 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-trusted-ca\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.305700 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-certificates\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.308019 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-registry-tls\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.311765 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c820eb3-363d-4598-a4d8-ed07a1555ff8-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.325030 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.329367 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvpk\" (UniqueName: \"kubernetes.io/projected/1c820eb3-363d-4598-a4d8-ed07a1555ff8-kube-api-access-kbvpk\") pod \"image-registry-5d9d95bf5b-rxwmh\" (UID: \"1c820eb3-363d-4598-a4d8-ed07a1555ff8\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.433550 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.531786 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8f5017-2bba-4282-afb7-a8728ec2a378" path="/var/lib/kubelet/pods/3b8f5017-2bba-4282-afb7-a8728ec2a378/volumes" Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.647088 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rxwmh"] Dec 11 17:05:18 crc kubenswrapper[5129]: I1211 17:05:18.654892 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 17:05:19 crc kubenswrapper[5129]: I1211 17:05:19.250722 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" event={"ID":"1c820eb3-363d-4598-a4d8-ed07a1555ff8","Type":"ContainerStarted","Data":"d85baacf1ec0b73c24f05d58554a52481fd37b404ca799af9ea5b66c9984694f"} Dec 11 17:05:19 crc kubenswrapper[5129]: I1211 17:05:19.251073 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" event={"ID":"1c820eb3-363d-4598-a4d8-ed07a1555ff8","Type":"ContainerStarted","Data":"bf41dc55f1b2525c69447d3390ee2b292b569fed5161ae1448e2681b5d36bc63"} Dec 11 17:05:19 crc kubenswrapper[5129]: I1211 17:05:19.251088 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:19 crc kubenswrapper[5129]: I1211 17:05:19.271131 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" podStartSLOduration=1.271113188 podStartE2EDuration="1.271113188s" podCreationTimestamp="2025-12-11 17:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 17:05:19.270136428 +0000 UTC m=+663.073666455" watchObservedRunningTime="2025-12-11 17:05:19.271113188 +0000 UTC m=+663.074643205" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.631988 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452"] Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.650748 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452"] Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.650920 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.653577 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.736064 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9dx\" (UniqueName: \"kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.736486 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 
crc kubenswrapper[5129]: I1211 17:05:20.736702 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.838097 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.838235 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9dx\" (UniqueName: \"kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.838332 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.838883 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.839016 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.871691 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9dx\" (UniqueName: \"kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:20 crc kubenswrapper[5129]: I1211 17:05:20.969947 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:21 crc kubenswrapper[5129]: I1211 17:05:21.213470 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452"] Dec 11 17:05:21 crc kubenswrapper[5129]: W1211 17:05:21.219345 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d485eb2_660e_4bbf_acc1_4c1c271c7237.slice/crio-820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df WatchSource:0}: Error finding container 820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df: Status 404 returned error can't find the container with id 820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df Dec 11 17:05:21 crc kubenswrapper[5129]: I1211 17:05:21.262802 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" event={"ID":"6d485eb2-660e-4bbf-acc1-4c1c271c7237","Type":"ContainerStarted","Data":"820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df"} Dec 11 17:05:22 crc kubenswrapper[5129]: I1211 17:05:22.273092 5129 generic.go:358] "Generic (PLEG): container finished" podID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerID="e6c5a0b7cb73b66759387b21f406af682f9519d001b636463953d822fa763f86" exitCode=0 Dec 11 17:05:22 crc kubenswrapper[5129]: I1211 17:05:22.273183 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" event={"ID":"6d485eb2-660e-4bbf-acc1-4c1c271c7237","Type":"ContainerDied","Data":"e6c5a0b7cb73b66759387b21f406af682f9519d001b636463953d822fa763f86"} Dec 11 17:05:24 crc kubenswrapper[5129]: I1211 17:05:24.288227 5129 generic.go:358] "Generic (PLEG): container finished" 
podID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerID="06c6e2b1cb0490608d7104f4bb3b9c572827b1a56fd5e35c18fff7bf7919c4ad" exitCode=0 Dec 11 17:05:24 crc kubenswrapper[5129]: I1211 17:05:24.288276 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" event={"ID":"6d485eb2-660e-4bbf-acc1-4c1c271c7237","Type":"ContainerDied","Data":"06c6e2b1cb0490608d7104f4bb3b9c572827b1a56fd5e35c18fff7bf7919c4ad"} Dec 11 17:05:25 crc kubenswrapper[5129]: I1211 17:05:25.305721 5129 generic.go:358] "Generic (PLEG): container finished" podID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerID="e961dfba1821cb3ab7fc741bba5075271e65a633e9f4e1d4037b9c5c35abe7c6" exitCode=0 Dec 11 17:05:25 crc kubenswrapper[5129]: I1211 17:05:25.306493 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" event={"ID":"6d485eb2-660e-4bbf-acc1-4c1c271c7237","Type":"ContainerDied","Data":"e961dfba1821cb3ab7fc741bba5075271e65a633e9f4e1d4037b9c5c35abe7c6"} Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.650977 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.832305 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9dx\" (UniqueName: \"kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx\") pod \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.832411 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util\") pod \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.833443 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle\") pod \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\" (UID: \"6d485eb2-660e-4bbf-acc1-4c1c271c7237\") " Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.836554 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle" (OuterVolumeSpecName: "bundle") pod "6d485eb2-660e-4bbf-acc1-4c1c271c7237" (UID: "6d485eb2-660e-4bbf-acc1-4c1c271c7237"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.842360 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx" (OuterVolumeSpecName: "kube-api-access-cl9dx") pod "6d485eb2-660e-4bbf-acc1-4c1c271c7237" (UID: "6d485eb2-660e-4bbf-acc1-4c1c271c7237"). InnerVolumeSpecName "kube-api-access-cl9dx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.850326 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util" (OuterVolumeSpecName: "util") pod "6d485eb2-660e-4bbf-acc1-4c1c271c7237" (UID: "6d485eb2-660e-4bbf-acc1-4c1c271c7237"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.934801 5129 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-util\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.934849 5129 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6d485eb2-660e-4bbf-acc1-4c1c271c7237-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:26 crc kubenswrapper[5129]: I1211 17:05:26.934864 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cl9dx\" (UniqueName: \"kubernetes.io/projected/6d485eb2-660e-4bbf-acc1-4c1c271c7237-kube-api-access-cl9dx\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.041424 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr"] Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.042862 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="util" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043020 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="util" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043123 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="extract" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043229 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="extract" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043316 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="pull" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043407 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="pull" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.043656 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d485eb2-660e-4bbf-acc1-4c1c271c7237" containerName="extract" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.329761 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" event={"ID":"6d485eb2-660e-4bbf-acc1-4c1c271c7237","Type":"ContainerDied","Data":"820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df"} Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.330683 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820a36f349edef6cb914eb3a2937b84c21b7d1eff2cfd98fd5cb54c3e51040df" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.330720 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.330741 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr"] Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.330912 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107h452" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.338916 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j76g\" (UniqueName: \"kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.339039 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.339094 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.440347 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5j76g\" (UniqueName: \"kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.440454 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.440532 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.441214 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.441359 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.465320 5129 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5j76g\" (UniqueName: \"kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.648755 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:27 crc kubenswrapper[5129]: I1211 17:05:27.849953 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr"] Dec 11 17:05:28 crc kubenswrapper[5129]: I1211 17:05:28.334809 5129 generic.go:358] "Generic (PLEG): container finished" podID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerID="f6e0f4b3e6fbd716bd54346c091d0977ebf16a44f44de8b191276e1ad40641d2" exitCode=0 Dec 11 17:05:28 crc kubenswrapper[5129]: I1211 17:05:28.334912 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" event={"ID":"9763ee46-b167-42fa-8115-0b58a4edbb28","Type":"ContainerDied","Data":"f6e0f4b3e6fbd716bd54346c091d0977ebf16a44f44de8b191276e1ad40641d2"} Dec 11 17:05:28 crc kubenswrapper[5129]: I1211 17:05:28.334972 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" event={"ID":"9763ee46-b167-42fa-8115-0b58a4edbb28","Type":"ContainerStarted","Data":"6b3b1923d52a322261c2594db7c39a57a22cad3a5fe8e82d886695217711a895"} Dec 11 17:05:30 crc kubenswrapper[5129]: I1211 17:05:30.349375 5129 generic.go:358] "Generic (PLEG): container finished" podID="9763ee46-b167-42fa-8115-0b58a4edbb28" 
containerID="9e8506ec13116f8832c64408517b31e10e2fa5e2866a4698b7353fa421bf269b" exitCode=0 Dec 11 17:05:30 crc kubenswrapper[5129]: I1211 17:05:30.349487 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" event={"ID":"9763ee46-b167-42fa-8115-0b58a4edbb28","Type":"ContainerDied","Data":"9e8506ec13116f8832c64408517b31e10e2fa5e2866a4698b7353fa421bf269b"} Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.144694 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c"] Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.149006 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.191285 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.191342 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.191439 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9tblv\" (UniqueName: \"kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.257291 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c"] Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.296244 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.296314 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.296362 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tblv\" (UniqueName: \"kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 
17:05:31.297196 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.297485 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.352882 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tblv\" (UniqueName: \"kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.359537 5129 generic.go:358] "Generic (PLEG): container finished" podID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerID="8b37243d76bf109e45bf4c6fa407b93d98fd24a19cca07afdd66732ec4d28798" exitCode=0 Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.359596 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" event={"ID":"9763ee46-b167-42fa-8115-0b58a4edbb28","Type":"ContainerDied","Data":"8b37243d76bf109e45bf4c6fa407b93d98fd24a19cca07afdd66732ec4d28798"} Dec 11 17:05:31 crc kubenswrapper[5129]: I1211 17:05:31.461520 
5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.185552 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c"] Dec 11 17:05:32 crc kubenswrapper[5129]: W1211 17:05:32.191654 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7306a6f_a1ea_4e9a_9948_fe7fe019ca5b.slice/crio-2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397 WatchSource:0}: Error finding container 2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397: Status 404 returned error can't find the container with id 2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397 Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.366255 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerStarted","Data":"50d09261c5f8b22521defc71be5587084811c5233295b0bac94aae390be06016"} Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.366687 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerStarted","Data":"2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397"} Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.753859 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.823944 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util\") pod \"9763ee46-b167-42fa-8115-0b58a4edbb28\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.824013 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle\") pod \"9763ee46-b167-42fa-8115-0b58a4edbb28\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.824137 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j76g\" (UniqueName: \"kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g\") pod \"9763ee46-b167-42fa-8115-0b58a4edbb28\" (UID: \"9763ee46-b167-42fa-8115-0b58a4edbb28\") " Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.825994 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle" (OuterVolumeSpecName: "bundle") pod "9763ee46-b167-42fa-8115-0b58a4edbb28" (UID: "9763ee46-b167-42fa-8115-0b58a4edbb28"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.846679 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util" (OuterVolumeSpecName: "util") pod "9763ee46-b167-42fa-8115-0b58a4edbb28" (UID: "9763ee46-b167-42fa-8115-0b58a4edbb28"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.921573 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g" (OuterVolumeSpecName: "kube-api-access-5j76g") pod "9763ee46-b167-42fa-8115-0b58a4edbb28" (UID: "9763ee46-b167-42fa-8115-0b58a4edbb28"). InnerVolumeSpecName "kube-api-access-5j76g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.925280 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5j76g\" (UniqueName: \"kubernetes.io/projected/9763ee46-b167-42fa-8115-0b58a4edbb28-kube-api-access-5j76g\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.925321 5129 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-util\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:32 crc kubenswrapper[5129]: I1211 17:05:32.925336 5129 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9763ee46-b167-42fa-8115-0b58a4edbb28-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:33 crc kubenswrapper[5129]: I1211 17:05:33.372917 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" Dec 11 17:05:33 crc kubenswrapper[5129]: I1211 17:05:33.372911 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejljxr" event={"ID":"9763ee46-b167-42fa-8115-0b58a4edbb28","Type":"ContainerDied","Data":"6b3b1923d52a322261c2594db7c39a57a22cad3a5fe8e82d886695217711a895"} Dec 11 17:05:33 crc kubenswrapper[5129]: I1211 17:05:33.373065 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3b1923d52a322261c2594db7c39a57a22cad3a5fe8e82d886695217711a895" Dec 11 17:05:33 crc kubenswrapper[5129]: I1211 17:05:33.374238 5129 generic.go:358] "Generic (PLEG): container finished" podID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerID="50d09261c5f8b22521defc71be5587084811c5233295b0bac94aae390be06016" exitCode=0 Dec 11 17:05:33 crc kubenswrapper[5129]: I1211 17:05:33.374293 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerDied","Data":"50d09261c5f8b22521defc71be5587084811c5233295b0bac94aae390be06016"} Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.404817 5129 generic.go:358] "Generic (PLEG): container finished" podID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerID="b7d2b6c1fb19c3d009daf87461ff1f98bf459a031ee9a379f93f56e2371ac28e" exitCode=0 Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.404871 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerDied","Data":"b7d2b6c1fb19c3d009daf87461ff1f98bf459a031ee9a379f93f56e2371ac28e"} Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.946847 5129 
patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.946925 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.961975 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-ltlxv"] Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962692 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="extract" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962712 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="extract" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962731 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="util" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962740 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="util" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962759 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="pull" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962766 5129 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="pull" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.962882 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="9763ee46-b167-42fa-8115-0b58a4edbb28" containerName="extract" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.966639 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.969180 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.970098 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-6bfg5\"" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.971667 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 11 17:05:38 crc kubenswrapper[5129]: I1211 17:05:38.976506 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-ltlxv"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.090442 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.094652 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.099246 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.099266 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-dcc48\"" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.102093 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcn2x\" (UniqueName: \"kubernetes.io/projected/80e08f4a-c255-448b-a0f7-f40737813f87-kube-api-access-wcn2x\") pod \"obo-prometheus-operator-86648f486b-ltlxv\" (UID: \"80e08f4a-c255-448b-a0f7-f40737813f87\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.104314 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.108745 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.112868 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.117090 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.203679 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wcn2x\" (UniqueName: \"kubernetes.io/projected/80e08f4a-c255-448b-a0f7-f40737813f87-kube-api-access-wcn2x\") pod \"obo-prometheus-operator-86648f486b-ltlxv\" (UID: \"80e08f4a-c255-448b-a0f7-f40737813f87\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.203981 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.204074 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.204158 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.204249 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.228240 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcn2x\" (UniqueName: \"kubernetes.io/projected/80e08f4a-c255-448b-a0f7-f40737813f87-kube-api-access-wcn2x\") pod \"obo-prometheus-operator-86648f486b-ltlxv\" (UID: \"80e08f4a-c255-448b-a0f7-f40737813f87\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.268914 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-x6lgl"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.274763 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.277199 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-g2kv2\"" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.277502 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.283094 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.286472 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-x6lgl"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.306216 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.306292 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.306321 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.306364 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.312674 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.312737 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddafdc3e-6453-4f49-9c40-047037df8090-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd\" (UID: \"ddafdc3e-6453-4f49-9c40-047037df8090\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.316778 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.323053 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/35f09c5b-94e9-467e-9964-b224e43af508-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c\" (UID: \"35f09c5b-94e9-467e-9964-b224e43af508\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.390867 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dft6m"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.400613 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.409412 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-t6cgj\"" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.410451 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.418315 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-observability-operator-tls\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.418439 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjq9p\" (UniqueName: \"kubernetes.io/projected/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-kube-api-access-fjq9p\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.422825 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.429617 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dft6m"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.470228 5129 generic.go:358] "Generic (PLEG): container finished" podID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerID="369ca2fc7e0b46c8347d48665a71d7320363ea86924831b260934b00e1d82acf" exitCode=0 Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.470400 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerDied","Data":"369ca2fc7e0b46c8347d48665a71d7320363ea86924831b260934b00e1d82acf"} Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.522875 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5c9\" (UniqueName: \"kubernetes.io/projected/ee8a5128-da7d-4046-809a-23b99744f654-kube-api-access-cg5c9\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: \"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.523152 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-observability-operator-tls\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.523180 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjq9p\" (UniqueName: 
\"kubernetes.io/projected/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-kube-api-access-fjq9p\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.523204 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8a5128-da7d-4046-809a-23b99744f654-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: \"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.538398 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-observability-operator-tls\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.542206 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjq9p\" (UniqueName: \"kubernetes.io/projected/677e34bb-46f8-4ce6-b4c6-6c0cbafb077c-kube-api-access-fjq9p\") pod \"observability-operator-78c97476f4-x6lgl\" (UID: \"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c\") " pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.590851 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.603485 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-ltlxv"] Dec 11 17:05:39 crc kubenswrapper[5129]: W1211 17:05:39.612081 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80e08f4a_c255_448b_a0f7_f40737813f87.slice/crio-083f489af3366d8114868f0b0c132a7f482f013640bb1b848b54ced1afa343f8 WatchSource:0}: Error finding container 083f489af3366d8114868f0b0c132a7f482f013640bb1b848b54ced1afa343f8: Status 404 returned error can't find the container with id 083f489af3366d8114868f0b0c132a7f482f013640bb1b848b54ced1afa343f8 Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.624792 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8a5128-da7d-4046-809a-23b99744f654-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: \"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.624854 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5c9\" (UniqueName: \"kubernetes.io/projected/ee8a5128-da7d-4046-809a-23b99744f654-kube-api-access-cg5c9\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: \"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.625959 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ee8a5128-da7d-4046-809a-23b99744f654-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: 
\"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.642971 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5c9\" (UniqueName: \"kubernetes.io/projected/ee8a5128-da7d-4046-809a-23b99744f654-kube-api-access-cg5c9\") pod \"perses-operator-68bdb49cbf-dft6m\" (UID: \"ee8a5128-da7d-4046-809a-23b99744f654\") " pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.745315 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd"] Dec 11 17:05:39 crc kubenswrapper[5129]: I1211 17:05:39.759494 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.015075 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dft6m"] Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.025009 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c"] Dec 11 17:05:40 crc kubenswrapper[5129]: W1211 17:05:40.033450 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee8a5128_da7d_4046_809a_23b99744f654.slice/crio-4f3ec2f417fac9e4e195b8af4d32835028b1fe19239a866bc04a08dac10f4359 WatchSource:0}: Error finding container 4f3ec2f417fac9e4e195b8af4d32835028b1fe19239a866bc04a08dac10f4359: Status 404 returned error can't find the container with id 4f3ec2f417fac9e4e195b8af4d32835028b1fe19239a866bc04a08dac10f4359 Dec 11 17:05:40 crc kubenswrapper[5129]: W1211 17:05:40.038607 5129 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f09c5b_94e9_467e_9964_b224e43af508.slice/crio-8cfdbe6479a50da84b59faec869648e853df72b30bf9a018c7d8e81b8a2f5f6d WatchSource:0}: Error finding container 8cfdbe6479a50da84b59faec869648e853df72b30bf9a018c7d8e81b8a2f5f6d: Status 404 returned error can't find the container with id 8cfdbe6479a50da84b59faec869648e853df72b30bf9a018c7d8e81b8a2f5f6d Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.122462 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-x6lgl"] Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.244395 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7f6c65b7c8-zxl5f"] Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.250923 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.254479 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.254662 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.256743 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.257098 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-vb5tq\"" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.268705 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rxwmh" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 
17:05:40.273909 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f6c65b7c8-zxl5f"] Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.334727 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-apiservice-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.334783 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzdk\" (UniqueName: \"kubernetes.io/projected/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-kube-api-access-4qzdk\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.334880 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-webhook-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.403609 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"] Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.437786 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-apiservice-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " 
pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.437842 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzdk\" (UniqueName: \"kubernetes.io/projected/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-kube-api-access-4qzdk\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.437917 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-webhook-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.444166 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-apiservice-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.459432 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzdk\" (UniqueName: \"kubernetes.io/projected/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-kube-api-access-4qzdk\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: \"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.460580 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e-webhook-cert\") pod \"elastic-operator-7f6c65b7c8-zxl5f\" (UID: 
\"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e\") " pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.479907 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" event={"ID":"35f09c5b-94e9-467e-9964-b224e43af508","Type":"ContainerStarted","Data":"8cfdbe6479a50da84b59faec869648e853df72b30bf9a018c7d8e81b8a2f5f6d"} Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.488013 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" event={"ID":"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c","Type":"ContainerStarted","Data":"3fb918521a1463ffc3c5bd71b3ae3b7fda37fcb82b24038b0a3003a45e686df9"} Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.489435 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" event={"ID":"80e08f4a-c255-448b-a0f7-f40737813f87","Type":"ContainerStarted","Data":"083f489af3366d8114868f0b0c132a7f482f013640bb1b848b54ced1afa343f8"} Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.490732 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" event={"ID":"ee8a5128-da7d-4046-809a-23b99744f654","Type":"ContainerStarted","Data":"4f3ec2f417fac9e4e195b8af4d32835028b1fe19239a866bc04a08dac10f4359"} Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.491949 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" event={"ID":"ddafdc3e-6453-4f49-9c40-047037df8090","Type":"ContainerStarted","Data":"723f37443365602fefc475aee6f1552c962176bc9bbf630ec4fc9181ad9e21a6"} Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.590792 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" Dec 11 17:05:40 crc kubenswrapper[5129]: I1211 17:05:40.906166 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.045662 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tblv\" (UniqueName: \"kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv\") pod \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.045772 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle\") pod \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.045886 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util\") pod \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\" (UID: \"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b\") " Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.046765 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle" (OuterVolumeSpecName: "bundle") pod "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" (UID: "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.050672 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv" (OuterVolumeSpecName: "kube-api-access-9tblv") pod "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" (UID: "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b"). InnerVolumeSpecName "kube-api-access-9tblv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.061832 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util" (OuterVolumeSpecName: "util") pod "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" (UID: "d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.108029 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7f6c65b7c8-zxl5f"] Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.147235 5129 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.147582 5129 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-util\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.147591 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tblv\" (UniqueName: \"kubernetes.io/projected/d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b-kube-api-access-9tblv\") on node \"crc\" DevicePath \"\"" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.503706 5129 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" event={"ID":"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e","Type":"ContainerStarted","Data":"95a13ac5650ead7df5dda72f5d38c3bb72e1969f7e28a7500720a28153ad2339"} Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.507660 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" event={"ID":"d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b","Type":"ContainerDied","Data":"2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397"} Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.507710 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abw98c" Dec 11 17:05:41 crc kubenswrapper[5129]: I1211 17:05:41.507751 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d163b065ee92e9c9c480e85300c67ccb7413066bec9457214c662c987b90397" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.709382 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd"] Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710399 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="util" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710431 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="util" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710452 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="pull" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710457 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" 
containerName="pull" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710477 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="extract" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710482 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="extract" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.710612 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7306a6f-a1ea-4e9a-9948-fe7fe019ca5b" containerName="extract" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.715644 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.718226 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.718532 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.718666 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-6t8d6\"" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.724047 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd"] Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.787606 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51e37673-3ae2-4d02-9712-2049e2dc5f98-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: 
\"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.787724 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49vw\" (UniqueName: \"kubernetes.io/projected/51e37673-3ae2-4d02-9712-2049e2dc5f98-kube-api-access-h49vw\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: \"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.888854 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51e37673-3ae2-4d02-9712-2049e2dc5f98-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: \"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.888949 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h49vw\" (UniqueName: \"kubernetes.io/projected/51e37673-3ae2-4d02-9712-2049e2dc5f98-kube-api-access-h49vw\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: \"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc kubenswrapper[5129]: I1211 17:05:52.889626 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51e37673-3ae2-4d02-9712-2049e2dc5f98-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: \"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:52 crc 
kubenswrapper[5129]: I1211 17:05:52.928896 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49vw\" (UniqueName: \"kubernetes.io/projected/51e37673-3ae2-4d02-9712-2049e2dc5f98-kube-api-access-h49vw\") pod \"cert-manager-operator-controller-manager-64c74584c4-z2qzd\" (UID: \"51e37673-3ae2-4d02-9712-2049e2dc5f98\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:53 crc kubenswrapper[5129]: I1211 17:05:53.041417 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" Dec 11 17:05:54 crc kubenswrapper[5129]: I1211 17:05:54.896139 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd"] Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.678290 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" event={"ID":"80e08f4a-c255-448b-a0f7-f40737813f87","Type":"ContainerStarted","Data":"6b6c599591f185547ea54d753b1fd98cc29f5a9b1ab03358910b9aafbf3089b2"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.681257 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" event={"ID":"1fe63aa4-4ed5-437e-a5b2-8f18bc47e11e","Type":"ContainerStarted","Data":"8972790b7b37218419e49a78de40ac6207fd2932b0541c570dc635ad1f09c450"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.688884 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" event={"ID":"ee8a5128-da7d-4046-809a-23b99744f654","Type":"ContainerStarted","Data":"13b1d8526b827fedaee22a1e570a70b74f258aee5b2992c0c780396aefcb89cc"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.688956 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.690631 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" event={"ID":"ddafdc3e-6453-4f49-9c40-047037df8090","Type":"ContainerStarted","Data":"e09be8dca169196c85de949d8937e1adf27f354336cd3de3e82fb9aa67c19d78"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.692557 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" event={"ID":"51e37673-3ae2-4d02-9712-2049e2dc5f98","Type":"ContainerStarted","Data":"5f9cf4b014a72c7a7075413a8768b7794fce912b531b5608ee1088df217796e3"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.695292 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" event={"ID":"35f09c5b-94e9-467e-9964-b224e43af508","Type":"ContainerStarted","Data":"6a1abeb707318001a8a3fe9b4e75eccfaf21a1294083fa56fd7aa66c849ff384"} Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.698866 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-ltlxv" podStartSLOduration=2.640416218 podStartE2EDuration="17.698854212s" podCreationTimestamp="2025-12-11 17:05:38 +0000 UTC" firstStartedPulling="2025-12-11 17:05:39.61439694 +0000 UTC m=+683.417926957" lastFinishedPulling="2025-12-11 17:05:54.672834934 +0000 UTC m=+698.476364951" observedRunningTime="2025-12-11 17:05:55.695944982 +0000 UTC m=+699.499474999" watchObservedRunningTime="2025-12-11 17:05:55.698854212 +0000 UTC m=+699.502384229" Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.721226 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m" 
podStartSLOduration=2.118183108 podStartE2EDuration="16.721211043s" podCreationTimestamp="2025-12-11 17:05:39 +0000 UTC" firstStartedPulling="2025-12-11 17:05:40.043719902 +0000 UTC m=+683.847249919" lastFinishedPulling="2025-12-11 17:05:54.646747827 +0000 UTC m=+698.450277854" observedRunningTime="2025-12-11 17:05:55.720446111 +0000 UTC m=+699.523976128" watchObservedRunningTime="2025-12-11 17:05:55.721211043 +0000 UTC m=+699.524741060" Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.761926 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-wxp2c" podStartSLOduration=2.139150076 podStartE2EDuration="16.761913562s" podCreationTimestamp="2025-12-11 17:05:39 +0000 UTC" firstStartedPulling="2025-12-11 17:05:40.041682379 +0000 UTC m=+683.845212386" lastFinishedPulling="2025-12-11 17:05:54.664445845 +0000 UTC m=+698.467975872" observedRunningTime="2025-12-11 17:05:55.759635002 +0000 UTC m=+699.563165019" watchObservedRunningTime="2025-12-11 17:05:55.761913562 +0000 UTC m=+699.565443579" Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.763906 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-ff85bd4b6-drrbd" podStartSLOduration=1.816064068 podStartE2EDuration="16.763898343s" podCreationTimestamp="2025-12-11 17:05:39 +0000 UTC" firstStartedPulling="2025-12-11 17:05:39.767150312 +0000 UTC m=+683.570680329" lastFinishedPulling="2025-12-11 17:05:54.714984587 +0000 UTC m=+698.518514604" observedRunningTime="2025-12-11 17:05:55.742304146 +0000 UTC m=+699.545834163" watchObservedRunningTime="2025-12-11 17:05:55.763898343 +0000 UTC m=+699.567428360" Dec 11 17:05:55 crc kubenswrapper[5129]: I1211 17:05:55.778262 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7f6c65b7c8-zxl5f" podStartSLOduration=2.405579578 
podStartE2EDuration="15.778249617s" podCreationTimestamp="2025-12-11 17:05:40 +0000 UTC" firstStartedPulling="2025-12-11 17:05:41.181086653 +0000 UTC m=+684.984616670" lastFinishedPulling="2025-12-11 17:05:54.553756692 +0000 UTC m=+698.357286709" observedRunningTime="2025-12-11 17:05:55.776806673 +0000 UTC m=+699.580336690" watchObservedRunningTime="2025-12-11 17:05:55.778249617 +0000 UTC m=+699.581779634" Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 17:06:01.730686 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" event={"ID":"677e34bb-46f8-4ce6-b4c6-6c0cbafb077c","Type":"ContainerStarted","Data":"bf92518381e75e9daaa3e7248bb52dfd63363028db03b75a1cb95fbc4cbccb24"} Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 17:06:01.731105 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 17:06:01.734145 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" event={"ID":"51e37673-3ae2-4d02-9712-2049e2dc5f98","Type":"ContainerStarted","Data":"ca7fd6c0cbeb86cd9a1f439a2c002fb41d12817031d121086d936f2622d40129"} Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 17:06:01.753882 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" podStartSLOduration=1.782610215 podStartE2EDuration="22.75386064s" podCreationTimestamp="2025-12-11 17:05:39 +0000 UTC" firstStartedPulling="2025-12-11 17:05:40.131729443 +0000 UTC m=+683.935259460" lastFinishedPulling="2025-12-11 17:06:01.102979878 +0000 UTC m=+704.906509885" observedRunningTime="2025-12-11 17:06:01.74903458 +0000 UTC m=+705.552564607" watchObservedRunningTime="2025-12-11 17:06:01.75386064 +0000 UTC m=+705.557390657" Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 
17:06:01.767654 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-x6lgl" Dec 11 17:06:01 crc kubenswrapper[5129]: I1211 17:06:01.777070 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-z2qzd" podStartSLOduration=3.6009214050000002 podStartE2EDuration="9.777054116s" podCreationTimestamp="2025-12-11 17:05:52 +0000 UTC" firstStartedPulling="2025-12-11 17:05:54.912816413 +0000 UTC m=+698.716346430" lastFinishedPulling="2025-12-11 17:06:01.088949124 +0000 UTC m=+704.892479141" observedRunningTime="2025-12-11 17:06:01.771324939 +0000 UTC m=+705.574854966" watchObservedRunningTime="2025-12-11 17:06:01.777054116 +0000 UTC m=+705.580584133" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.162531 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.173013 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179199 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179271 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179299 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179317 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179404 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179463 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179504 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179537 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/459a4b85-fc93-4395-8cd2-78bcd2dc4138-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179565 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" 
(UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179582 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179614 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179656 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179673 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179811 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.179833 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181116 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181168 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181399 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181472 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181671 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181673 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.181728 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.182354 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.182812 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-rwj67\""
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.198116 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340588 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/459a4b85-fc93-4395-8cd2-78bcd2dc4138-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340657 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340685 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340723 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340772 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340793 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340890 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.340915 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341008 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341053 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341100 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341123 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341156 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341206 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.341255 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.342056 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.345866 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.345866 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.346178 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.348556 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.351742 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.352955 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.353017 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.354775 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.354952 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.355787 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/459a4b85-fc93-4395-8cd2-78bcd2dc4138-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.356228 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/459a4b85-fc93-4395-8cd2-78bcd2dc4138-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.356457 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.356546 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.369215 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/459a4b85-fc93-4395-8cd2-78bcd2dc4138-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"459a4b85-fc93-4395-8cd2-78bcd2dc4138\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.491588 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 17:06:03 crc kubenswrapper[5129]: I1211 17:06:03.955724 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 17:06:03 crc kubenswrapper[5129]: W1211 17:06:03.969934 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod459a4b85_fc93_4395_8cd2_78bcd2dc4138.slice/crio-77bb962ee32a192fc302779ccdb80b2f7c3f91929e6c44a4bde9acc23a9d66f7 WatchSource:0}: Error finding container 77bb962ee32a192fc302779ccdb80b2f7c3f91929e6c44a4bde9acc23a9d66f7: Status 404 returned error can't find the container with id 77bb962ee32a192fc302779ccdb80b2f7c3f91929e6c44a4bde9acc23a9d66f7
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.179133 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t447m"]
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.232054 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t447m"]
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.232228 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.235735 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.235869 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.236770 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-hp8f2\""
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.253440 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99lzg\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-kube-api-access-99lzg\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.253828 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.355710 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.355783 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-99lzg\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-kube-api-access-99lzg\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.379354 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.385212 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-99lzg\" (UniqueName: \"kubernetes.io/projected/2de3170a-9ba8-4172-9d8a-d39a9f5e5699-kube-api-access-99lzg\") pod \"cert-manager-webhook-7894b5b9b4-t447m\" (UID: \"2de3170a-9ba8-4172-9d8a-d39a9f5e5699\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.553603 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m"
Dec 11 17:06:04 crc kubenswrapper[5129]: I1211 17:06:04.752395 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"459a4b85-fc93-4395-8cd2-78bcd2dc4138","Type":"ContainerStarted","Data":"77bb962ee32a192fc302779ccdb80b2f7c3f91929e6c44a4bde9acc23a9d66f7"}
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.076395 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-t447m"]
Dec 11 17:06:05 crc kubenswrapper[5129]: W1211 17:06:05.121258 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2de3170a_9ba8_4172_9d8a_d39a9f5e5699.slice/crio-977a7774e879bd8e51f58a2b01480391f4d4c77461583143a98efef8c6dbd90b WatchSource:0}: Error finding container 977a7774e879bd8e51f58a2b01480391f4d4c77461583143a98efef8c6dbd90b: Status 404 returned error can't find the container with id 977a7774e879bd8e51f58a2b01480391f4d4c77461583143a98efef8c6dbd90b
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.442734 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" podUID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" containerName="registry" containerID="cri-o://c0ad7fad2509048a88ed886cb637f4a9d49f8c063e2c428bb01e66e5b015fa1b" gracePeriod=30
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.767833 5129 generic.go:358] "Generic (PLEG): container finished" podID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" containerID="c0ad7fad2509048a88ed886cb637f4a9d49f8c063e2c428bb01e66e5b015fa1b" exitCode=0
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.767914 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" event={"ID":"771edef5-cdf9-463f-8fa5-824e3d0f0f0d","Type":"ContainerDied","Data":"c0ad7fad2509048a88ed886cb637f4a9d49f8c063e2c428bb01e66e5b015fa1b"}
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.769188 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m" event={"ID":"2de3170a-9ba8-4172-9d8a-d39a9f5e5699","Type":"ContainerStarted","Data":"977a7774e879bd8e51f58a2b01480391f4d4c77461583143a98efef8c6dbd90b"}
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.856492 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.923699 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jpn\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.923767 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924260 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924315 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924406 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924471 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924572 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.924634 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token\") pod \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\" (UID: \"771edef5-cdf9-463f-8fa5-824e3d0f0f0d\") "
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.927121 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.926832 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.930750 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.931078 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.933746 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.937049 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn" (OuterVolumeSpecName: "kube-api-access-s5jpn") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "kube-api-access-s5jpn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.937642 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 11 17:06:05 crc kubenswrapper[5129]: I1211 17:06:05.954069 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "771edef5-cdf9-463f-8fa5-824e3d0f0f0d" (UID: "771edef5-cdf9-463f-8fa5-824e3d0f0f0d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025522 5129 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025565 5129 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025579 5129 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025590 5129 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025601 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5jpn\" (UniqueName: \"kubernetes.io/projected/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-kube-api-access-s5jpn\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025612 5129 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.025623 5129 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/771edef5-cdf9-463f-8fa5-824e3d0f0f0d-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.708080 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-dft6m"
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.777947 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-87vjc"
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.777963 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-87vjc" event={"ID":"771edef5-cdf9-463f-8fa5-824e3d0f0f0d","Type":"ContainerDied","Data":"b7b1baf791d386c8e2e51d614395825d7a73836b5675ad7130b39ef99de804b2"}
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.778037 5129 scope.go:117] "RemoveContainer" containerID="c0ad7fad2509048a88ed886cb637f4a9d49f8c063e2c428bb01e66e5b015fa1b"
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.796491 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"]
Dec 11 17:06:06 crc kubenswrapper[5129]: I1211 17:06:06.802623 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-87vjc"]
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.051955 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"]
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.053149 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" containerName="registry"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.053173 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" containerName="registry"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.053343 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" containerName="registry"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.068968 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.071457 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-5qpqr\""
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.076938 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"]
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.155113 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.155178 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvkr\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-kube-api-access-9zvkr\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.256921 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.256993 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zvkr\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-kube-api-access-9zvkr\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.274833 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zvkr\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-kube-api-access-9zvkr\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.275353 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cba7e73d-4a66-4b54-a6b4-aa3c1259330c-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-8khtf\" (UID: \"cba7e73d-4a66-4b54-a6b4-aa3c1259330c\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.384065 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.559434 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771edef5-cdf9-463f-8fa5-824e3d0f0f0d" path="/var/lib/kubelet/pods/771edef5-cdf9-463f-8fa5-824e3d0f0f0d/volumes"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.948154 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.948242 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.948291 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.948991 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 17:06:08 crc kubenswrapper[5129]: I1211 17:06:08.949066 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf"
containerName="machine-config-daemon" containerID="cri-o://8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b" gracePeriod=600 Dec 11 17:06:09 crc kubenswrapper[5129]: I1211 17:06:09.879862 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b" exitCode=0 Dec 11 17:06:09 crc kubenswrapper[5129]: I1211 17:06:09.880384 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b"} Dec 11 17:06:09 crc kubenswrapper[5129]: I1211 17:06:09.880504 5129 scope.go:117] "RemoveContainer" containerID="6dc09ad4273c6049f3cdd75c94f381f5b1081c1912d30fe7d468b4b5a0e805e7" Dec 11 17:06:13 crc kubenswrapper[5129]: I1211 17:06:13.436434 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-lq6cf"] Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.125144 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-lq6cf"] Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.125319 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.132996 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-q68jr\"" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.212204 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fn8\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-kube-api-access-g9fn8\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.212319 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-bound-sa-token\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.313926 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fn8\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-kube-api-access-g9fn8\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.314039 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-bound-sa-token\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.337024 5129 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-bound-sa-token\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.337761 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fn8\" (UniqueName: \"kubernetes.io/projected/c76171bb-ceb0-402b-b507-fd7818ea606d-kube-api-access-g9fn8\") pod \"cert-manager-858d87f86b-lq6cf\" (UID: \"c76171bb-ceb0-402b-b507-fd7818ea606d\") " pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:14 crc kubenswrapper[5129]: I1211 17:06:14.441081 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-lq6cf" Dec 11 17:06:20 crc kubenswrapper[5129]: I1211 17:06:20.165498 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf"] Dec 11 17:06:27 crc kubenswrapper[5129]: W1211 17:06:27.063024 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcba7e73d_4a66_4b54_a6b4_aa3c1259330c.slice/crio-f5ed9420aeb9131531eb46228b07d6ae5e574b62b8c36ef79b8123960f757689 WatchSource:0}: Error finding container f5ed9420aeb9131531eb46228b07d6ae5e574b62b8c36ef79b8123960f757689: Status 404 returned error can't find the container with id f5ed9420aeb9131531eb46228b07d6ae5e574b62b8c36ef79b8123960f757689 Dec 11 17:06:28 crc kubenswrapper[5129]: I1211 17:06:28.001379 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf" event={"ID":"cba7e73d-4a66-4b54-a6b4-aa3c1259330c","Type":"ContainerStarted","Data":"f5ed9420aeb9131531eb46228b07d6ae5e574b62b8c36ef79b8123960f757689"} Dec 11 17:06:29 crc kubenswrapper[5129]: 
I1211 17:06:29.032750 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed"} Dec 11 17:06:29 crc kubenswrapper[5129]: I1211 17:06:29.249050 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-lq6cf"] Dec 11 17:06:29 crc kubenswrapper[5129]: W1211 17:06:29.360718 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc76171bb_ceb0_402b_b507_fd7818ea606d.slice/crio-88b23c51a4eae1c46e1d7dbe88ef8d5365cd57f9c607c10dd697da4e7cf464b8 WatchSource:0}: Error finding container 88b23c51a4eae1c46e1d7dbe88ef8d5365cd57f9c607c10dd697da4e7cf464b8: Status 404 returned error can't find the container with id 88b23c51a4eae1c46e1d7dbe88ef8d5365cd57f9c607c10dd697da4e7cf464b8 Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.039035 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m" event={"ID":"2de3170a-9ba8-4172-9d8a-d39a9f5e5699","Type":"ContainerStarted","Data":"52b0dffebfbf54746d8110c9e534c8b0a7fc89fef7d95bc22d70f2d2341d7bdf"} Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.039476 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m" Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.042979 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf" event={"ID":"cba7e73d-4a66-4b54-a6b4-aa3c1259330c","Type":"ContainerStarted","Data":"5f1b6aa418a03114fee327576171a818dba8a64d1fbbec232ba7cc0c8476d8c5"} Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.045001 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858d87f86b-lq6cf" event={"ID":"c76171bb-ceb0-402b-b507-fd7818ea606d","Type":"ContainerStarted","Data":"eb74a1bc24da45135bd7964663a6e0f1e00689879624200b5d6e56fac96e56bf"} Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.045056 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-lq6cf" event={"ID":"c76171bb-ceb0-402b-b507-fd7818ea606d","Type":"ContainerStarted","Data":"88b23c51a4eae1c46e1d7dbe88ef8d5365cd57f9c607c10dd697da4e7cf464b8"} Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.047384 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"459a4b85-fc93-4395-8cd2-78bcd2dc4138","Type":"ContainerStarted","Data":"443e19ae39e41ecacd79193348be52eef04da3875547b1f439aed983ce73f852"} Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.059606 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m" podStartSLOduration=1.790267462 podStartE2EDuration="26.059588196s" podCreationTimestamp="2025-12-11 17:06:04 +0000 UTC" firstStartedPulling="2025-12-11 17:06:05.126987648 +0000 UTC m=+708.930517665" lastFinishedPulling="2025-12-11 17:06:29.396308372 +0000 UTC m=+733.199838399" observedRunningTime="2025-12-11 17:06:30.058617936 +0000 UTC m=+733.862147963" watchObservedRunningTime="2025-12-11 17:06:30.059588196 +0000 UTC m=+733.863118213" Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.078871 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-8khtf" podStartSLOduration=19.749758922 podStartE2EDuration="22.078852596s" podCreationTimestamp="2025-12-11 17:06:08 +0000 UTC" firstStartedPulling="2025-12-11 17:06:27.066761723 +0000 UTC m=+730.870291740" lastFinishedPulling="2025-12-11 17:06:29.395855387 +0000 UTC m=+733.199385414" observedRunningTime="2025-12-11 
17:06:30.075671097 +0000 UTC m=+733.879201114" watchObservedRunningTime="2025-12-11 17:06:30.078852596 +0000 UTC m=+733.882382613" Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.138705 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-lq6cf" podStartSLOduration=17.138690919 podStartE2EDuration="17.138690919s" podCreationTimestamp="2025-12-11 17:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 17:06:30.135489359 +0000 UTC m=+733.939019376" watchObservedRunningTime="2025-12-11 17:06:30.138690919 +0000 UTC m=+733.942220936" Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.232799 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 17:06:30 crc kubenswrapper[5129]: I1211 17:06:30.269806 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 17:06:32 crc kubenswrapper[5129]: I1211 17:06:32.063157 5129 generic.go:358] "Generic (PLEG): container finished" podID="459a4b85-fc93-4395-8cd2-78bcd2dc4138" containerID="443e19ae39e41ecacd79193348be52eef04da3875547b1f439aed983ce73f852" exitCode=0 Dec 11 17:06:32 crc kubenswrapper[5129]: I1211 17:06:32.063299 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"459a4b85-fc93-4395-8cd2-78bcd2dc4138","Type":"ContainerDied","Data":"443e19ae39e41ecacd79193348be52eef04da3875547b1f439aed983ce73f852"} Dec 11 17:06:33 crc kubenswrapper[5129]: I1211 17:06:33.089282 5129 generic.go:358] "Generic (PLEG): container finished" podID="459a4b85-fc93-4395-8cd2-78bcd2dc4138" containerID="1cd9f6b0e7d0f05819d7bff60a0349947c56b363b802bd9af60f527dd0358faf" exitCode=0 Dec 11 17:06:33 crc kubenswrapper[5129]: I1211 17:06:33.089424 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"459a4b85-fc93-4395-8cd2-78bcd2dc4138","Type":"ContainerDied","Data":"1cd9f6b0e7d0f05819d7bff60a0349947c56b363b802bd9af60f527dd0358faf"} Dec 11 17:06:34 crc kubenswrapper[5129]: I1211 17:06:34.099698 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"459a4b85-fc93-4395-8cd2-78bcd2dc4138","Type":"ContainerStarted","Data":"982d19de59cb960d8246730c09fe252d48e9d0eeb894d730c78c7a976b26f289"} Dec 11 17:06:34 crc kubenswrapper[5129]: I1211 17:06:34.100002 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:34 crc kubenswrapper[5129]: I1211 17:06:34.142305 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=5.653747272 podStartE2EDuration="31.142281961s" podCreationTimestamp="2025-12-11 17:06:03 +0000 UTC" firstStartedPulling="2025-12-11 17:06:03.972289491 +0000 UTC m=+707.775819518" lastFinishedPulling="2025-12-11 17:06:29.46082417 +0000 UTC m=+733.264354207" observedRunningTime="2025-12-11 17:06:34.139869056 +0000 UTC m=+737.943399093" watchObservedRunningTime="2025-12-11 17:06:34.142281961 +0000 UTC m=+737.945811978" Dec 11 17:06:36 crc kubenswrapper[5129]: I1211 17:06:36.057706 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-t447m" Dec 11 17:06:45 crc kubenswrapper[5129]: I1211 17:06:45.194294 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="459a4b85-fc93-4395-8cd2-78bcd2dc4138" containerName="elasticsearch" probeResult="failure" output=< Dec 11 17:06:45 crc kubenswrapper[5129]: {"timestamp": "2025-12-11T17:06:45+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 11 17:06:45 crc kubenswrapper[5129]: > Dec 11 
17:06:50 crc kubenswrapper[5129]: I1211 17:06:50.206986 5129 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="459a4b85-fc93-4395-8cd2-78bcd2dc4138" containerName="elasticsearch" probeResult="failure" output=< Dec 11 17:06:50 crc kubenswrapper[5129]: {"timestamp": "2025-12-11T17:06:50+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 11 17:06:50 crc kubenswrapper[5129]: > Dec 11 17:06:55 crc kubenswrapper[5129]: I1211 17:06:55.327144 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 17:06:58 crc kubenswrapper[5129]: I1211 17:06:58.910563 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.263418 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.263614 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.265784 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.266155 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.267730 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.269912 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-wlpf2\"" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.270813 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.390840 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.390931 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391085 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391160 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391351 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391442 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391482 5129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391605 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391722 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391752 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391803 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391851 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5jzf\" (UniqueName: \"kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.391927 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494136 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494763 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494793 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494818 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494865 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494899 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494808 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495063 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495204 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.494918 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495213 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc 
kubenswrapper[5129]: I1211 17:06:59.495323 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495415 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495471 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495621 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495645 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495704 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495722 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c5jzf\" (UniqueName: \"kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.495752 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.496158 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.496583 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.496832 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.503353 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.503829 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.510142 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.518583 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5jzf\" (UniqueName: \"kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.585539 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:06:59 crc kubenswrapper[5129]: I1211 17:06:59.818421 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 17:06:59 crc kubenswrapper[5129]: W1211 17:06:59.818637 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce8d122_97aa_4f2e_9bc9_de07bc5913b2.slice/crio-501b0056a695239dcd5f5b375db40d825b2e438dce73227849fdbc3f9844f44b WatchSource:0}: Error finding container 501b0056a695239dcd5f5b375db40d825b2e438dce73227849fdbc3f9844f44b: Status 404 returned error can't find the container with id 501b0056a695239dcd5f5b375db40d825b2e438dce73227849fdbc3f9844f44b
Dec 11 17:07:00 crc kubenswrapper[5129]: I1211 17:07:00.278672 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2","Type":"ContainerStarted","Data":"501b0056a695239dcd5f5b375db40d825b2e438dce73227849fdbc3f9844f44b"}
Dec 11 17:07:10 crc kubenswrapper[5129]: I1211 17:07:10.361975 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2","Type":"ContainerStarted","Data":"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"}
Dec 11 17:07:10 crc kubenswrapper[5129]: I1211 17:07:10.440832 5129 ???:1] "http: TLS handshake error from 192.168.126.11:53000: no serving certificate available for the kubelet"
Dec 11 17:07:11 crc kubenswrapper[5129]: I1211 17:07:11.480695 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.376836 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" containerName="git-clone" containerID="cri-o://b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b" gracePeriod=30
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.829048 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_4ce8d122-97aa-4f2e-9bc9-de07bc5913b2/git-clone/0.log"
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.829132 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.884773 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5jzf\" (UniqueName: \"kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.884821 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.884946 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885081 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885128 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885153 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885213 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885286 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885300 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885347 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885353 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885394 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885497 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885506 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885595 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885630 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885706 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885743 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885780 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles\") pod \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\" (UID: \"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2\") "
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.885851 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886064 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886365 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886447 5129 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886484 5129 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886503 5129 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886549 5129 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886567 5129 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886583 5129 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886602 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.886619 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.891904 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-pull") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "builder-dockercfg-wlpf2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.891917 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-push") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "builder-dockercfg-wlpf2-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.892580 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.894580 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf" (OuterVolumeSpecName: "kube-api-access-c5jzf") pod "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" (UID: "4ce8d122-97aa-4f2e-9bc9-de07bc5913b2"). InnerVolumeSpecName "kube-api-access-c5jzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.987592 5129 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.987622 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c5jzf\" (UniqueName: \"kubernetes.io/projected/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-kube-api-access-c5jzf\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.987636 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-pull\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.987645 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-builder-dockercfg-wlpf2-push\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:12 crc kubenswrapper[5129]: I1211 17:07:12.987656 5129 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.387745 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_4ce8d122-97aa-4f2e-9bc9-de07bc5913b2/git-clone/0.log"
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.387803 5129 generic.go:358] "Generic (PLEG): container finished" podID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" containerID="b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b" exitCode=1
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.387988 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.387997 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2","Type":"ContainerDied","Data":"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"}
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.388067 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"4ce8d122-97aa-4f2e-9bc9-de07bc5913b2","Type":"ContainerDied","Data":"501b0056a695239dcd5f5b375db40d825b2e438dce73227849fdbc3f9844f44b"}
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.388103 5129 scope.go:117] "RemoveContainer" containerID="b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.413173 5129 scope.go:117] "RemoveContainer" containerID="b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"
Dec 11 17:07:13 crc kubenswrapper[5129]: E1211 17:07:13.413996 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b\": container with ID starting with b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b not found: ID does not exist" containerID="b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.414032 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b"} err="failed to get container status \"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b\": rpc error: code = NotFound desc = could not find container \"b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b\": container with ID starting with b1ea084148b6e02b514a47bc61bb8d3ff6e86a9db7719ae322a21ad7bb8e798b not found: ID does not exist"
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.427838 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 17:07:13 crc kubenswrapper[5129]: I1211 17:07:13.433010 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 17:07:14 crc kubenswrapper[5129]: I1211 17:07:14.529120 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" path="/var/lib/kubelet/pods/4ce8d122-97aa-4f2e-9bc9-de07bc5913b2/volumes"
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.961072 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.962374 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" containerName="git-clone"
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.962393 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" containerName="git-clone"
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.962621 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ce8d122-97aa-4f2e-9bc9-de07bc5913b2" containerName="git-clone"
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.979293 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.983971 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\""
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.984029 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\""
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.984102 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-wlpf2\""
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.984029 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\""
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.985466 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 17:07:22 crc kubenswrapper[5129]: I1211 17:07:22.989582 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033283 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sh8w\" (UniqueName: \"kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033380 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033435 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033526 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033549 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033577 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033599 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033674 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033830 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033883 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033909 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.033967 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.034004 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136064 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136175 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136204 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136241 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136272 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136299 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2sh8w\" (UniqueName: \"kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136304 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136503 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136765 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136946 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-2-build\" (UID:
\"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.136982 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137090 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137106 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137281 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137345 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137428 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137489 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137735 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.138201 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " 
pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.138295 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.137740 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.139153 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.148005 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.148049 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.148404 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.161079 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sh8w\" (UniqueName: \"kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w\") pod \"service-telemetry-framework-index-2-build\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.301916 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:23 crc kubenswrapper[5129]: I1211 17:07:23.619870 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:24 crc kubenswrapper[5129]: I1211 17:07:24.496413 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"54451a23-34ce-436d-8965-2cf1d4728bfe","Type":"ContainerStarted","Data":"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"}
Dec 11 17:07:24 crc kubenswrapper[5129]: I1211 17:07:24.496546 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"54451a23-34ce-436d-8965-2cf1d4728bfe","Type":"ContainerStarted","Data":"8b46f8c64a15da06b8db62aacc152c40b7b433f7dae427b368a27db3df890c32"}
Dec 11 17:07:24 crc kubenswrapper[5129]: I1211 17:07:24.586027 5129 ???:1] "http: TLS handshake error from 192.168.126.11:57164: no serving certificate available for the kubelet"
Dec 11 17:07:25 crc kubenswrapper[5129]: I1211 17:07:25.622237 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:26 crc kubenswrapper[5129]: I1211 17:07:26.513542 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="54451a23-34ce-436d-8965-2cf1d4728bfe" containerName="git-clone" containerID="cri-o://65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757" gracePeriod=30
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.001097 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_54451a23-34ce-436d-8965-2cf1d4728bfe/git-clone/0.log"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.001421 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102169 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102275 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102391 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102416 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102457 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID:
"54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102472 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.102497 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103117 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103251 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103324 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103351 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sh8w\" (UniqueName: \"kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103370 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103407 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103433 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103460 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103476 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103556 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull\") pod \"54451a23-34ce-436d-8965-2cf1d4728bfe\" (UID: \"54451a23-34ce-436d-8965-2cf1d4728bfe\") "
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103597 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103758 5129 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103773 5129 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103782 5129 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103792 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103800 5129 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54451a23-34ce-436d-8965-2cf1d4728bfe-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.103810 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "container-storage-root".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.104066 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.104583 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.104795 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.116658 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.123274 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w" (OuterVolumeSpecName: "kube-api-access-2sh8w") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "kube-api-access-2sh8w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.123289 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-push") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "builder-dockercfg-wlpf2-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.123922 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-pull") pod "54451a23-34ce-436d-8965-2cf1d4728bfe" (UID: "54451a23-34ce-436d-8965-2cf1d4728bfe"). InnerVolumeSpecName "builder-dockercfg-wlpf2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205819 5129 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205858 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205875 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-push\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205888 5129 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54451a23-34ce-436d-8965-2cf1d4728bfe-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205902 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-builder-dockercfg-wlpf2-pull\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205919 5129 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/54451a23-34ce-436d-8965-2cf1d4728bfe-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205939 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2sh8w\" (UniqueName: \"kubernetes.io/projected/54451a23-34ce-436d-8965-2cf1d4728bfe-kube-api-access-2sh8w\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.205952 5129 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/54451a23-34ce-436d-8965-2cf1d4728bfe-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.522801 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_54451a23-34ce-436d-8965-2cf1d4728bfe/git-clone/0.log"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.522865 5129 generic.go:358] "Generic (PLEG): container finished" podID="54451a23-34ce-436d-8965-2cf1d4728bfe" containerID="65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757" exitCode=1
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.522991 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"54451a23-34ce-436d-8965-2cf1d4728bfe","Type":"ContainerDied","Data":"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"}
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.523041 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"54451a23-34ce-436d-8965-2cf1d4728bfe","Type":"ContainerDied","Data":"8b46f8c64a15da06b8db62aacc152c40b7b433f7dae427b368a27db3df890c32"}
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.523075 5129 scope.go:117] "RemoveContainer" containerID="65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.523047 5129 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.547486 5129 scope.go:117] "RemoveContainer" containerID="65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"
Dec 11 17:07:27 crc kubenswrapper[5129]: E1211 17:07:27.548072 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757\": container with ID starting with 65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757 not found: ID does not exist" containerID="65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.548166 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757"} err="failed to get container status \"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757\": rpc error: code = NotFound desc = could not find container \"65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757\": container with ID starting with 65284322eabb056a11e040a682fbe3be15f658867d3005f07d8f4b56f09d1757 not found: ID does not exist"
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.562056 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:27 crc kubenswrapper[5129]: I1211 17:07:27.567173 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 17:07:28 crc kubenswrapper[5129]: I1211 17:07:28.532846 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54451a23-34ce-436d-8965-2cf1d4728bfe" path="/var/lib/kubelet/pods/54451a23-34ce-436d-8965-2cf1d4728bfe/volumes"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.128815 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.136992 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="54451a23-34ce-436d-8965-2cf1d4728bfe" containerName="git-clone"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.137041 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="54451a23-34ce-436d-8965-2cf1d4728bfe" containerName="git-clone"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.137249 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="54451a23-34ce-436d-8965-2cf1d4728bfe" containerName="git-clone"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.556214 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.556733 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.560387 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.560487 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\""
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.560745 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\""
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.560944 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-wlpf2\""
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.560949 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\""
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.563021 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.563079 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.563481 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564068 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564106 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564136 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564165 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m89s\" (UniqueName: \"kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564216 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564249 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564474 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564576 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName:
\"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564725 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.564913 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666631 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666885 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " 
pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666910 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666932 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666955 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2m89s\" (UniqueName: \"kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667109 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.666988 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: 
\"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667174 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667780 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667824 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667846 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc 
kubenswrapper[5129]: I1211 17:07:37.667865 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667878 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667895 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.667960 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668021 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668130 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668184 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668334 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668342 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.668541 
5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.669105 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.675422 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.675642 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.679097 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: 
\"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.688291 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m89s\" (UniqueName: \"kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s\") pod \"service-telemetry-framework-index-3-build\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:37 crc kubenswrapper[5129]: I1211 17:07:37.887973 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:38 crc kubenswrapper[5129]: I1211 17:07:38.118996 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 11 17:07:38 crc kubenswrapper[5129]: I1211 17:07:38.611008 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f6b67d81-437e-4cb6-921d-35ed3d237592","Type":"ContainerStarted","Data":"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"} Dec 11 17:07:38 crc kubenswrapper[5129]: I1211 17:07:38.611615 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f6b67d81-437e-4cb6-921d-35ed3d237592","Type":"ContainerStarted","Data":"354fed1d7bc8d26c5537e09da863312ed08240de9ba1a88cbcf1cacf26025b00"} Dec 11 17:07:38 crc kubenswrapper[5129]: I1211 17:07:38.668191 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55472: no serving certificate available for the kubelet" Dec 11 17:07:39 crc kubenswrapper[5129]: I1211 17:07:39.701573 5129 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 11 17:07:40 crc kubenswrapper[5129]: I1211 17:07:40.627358 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="f6b67d81-437e-4cb6-921d-35ed3d237592" containerName="git-clone" containerID="cri-o://8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43" gracePeriod=30 Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.546040 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_f6b67d81-437e-4cb6-921d-35ed3d237592/git-clone/0.log" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.546302 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.620987 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621078 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621145 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root\") pod 
\"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621221 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621272 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621333 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621339 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621564 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.621753 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622182 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622702 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622745 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622783 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622805 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622828 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m89s\" (UniqueName: \"kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622858 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622881 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.622905 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles\") pod \"f6b67d81-437e-4cb6-921d-35ed3d237592\" (UID: \"f6b67d81-437e-4cb6-921d-35ed3d237592\") " Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623301 5129 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623436 5129 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623453 5129 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623461 5129 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6b67d81-437e-4cb6-921d-35ed3d237592-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623470 5129 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623480 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.623488 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-container-storage-root\") on node \"crc\" DevicePath \"\"" 
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.624053 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.624448 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.624705 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.627902 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.628227 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s" (OuterVolumeSpecName: "kube-api-access-2m89s") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "kube-api-access-2m89s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.628280 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-push") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "builder-dockercfg-wlpf2-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.628996 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-pull") pod "f6b67d81-437e-4cb6-921d-35ed3d237592" (UID: "f6b67d81-437e-4cb6-921d-35ed3d237592"). InnerVolumeSpecName "builder-dockercfg-wlpf2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635334 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_f6b67d81-437e-4cb6-921d-35ed3d237592/git-clone/0.log"
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635389 5129 generic.go:358] "Generic (PLEG): container finished" podID="f6b67d81-437e-4cb6-921d-35ed3d237592" containerID="8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43" exitCode=1
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635490 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635501 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f6b67d81-437e-4cb6-921d-35ed3d237592","Type":"ContainerDied","Data":"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"}
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635565 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"f6b67d81-437e-4cb6-921d-35ed3d237592","Type":"ContainerDied","Data":"354fed1d7bc8d26c5537e09da863312ed08240de9ba1a88cbcf1cacf26025b00"}
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.635586 5129 scope.go:117] "RemoveContainer" containerID="8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.686835 5129 scope.go:117] "RemoveContainer" containerID="8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"
Dec 11 17:07:41 crc kubenswrapper[5129]: E1211 17:07:41.691083 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43\": container with ID starting with 8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43 not found: ID does not exist" containerID="8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.691142 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43"} err="failed to get container status \"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43\": rpc error: code = NotFound desc = could not find container \"8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43\": container with ID starting with 8b2bcd3e735c52e7a6e52f9cdbdbcbf8bb37d3c51f205cad4a3422a4455a4f43 not found: ID does not exist"
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.695439 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.699137 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725393 5129 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725449 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2m89s\" (UniqueName: \"kubernetes.io/projected/f6b67d81-437e-4cb6-921d-35ed3d237592-kube-api-access-2m89s\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725482 5129 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f6b67d81-437e-4cb6-921d-35ed3d237592-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725494 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-pull\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725521 5129 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b67d81-437e-4cb6-921d-35ed3d237592-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725533 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-builder-dockercfg-wlpf2-push\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:41 crc kubenswrapper[5129]: I1211 17:07:41.725545 5129 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f6b67d81-437e-4cb6-921d-35ed3d237592-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:42 crc kubenswrapper[5129]: I1211 17:07:42.527835 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b67d81-437e-4cb6-921d-35ed3d237592" path="/var/lib/kubelet/pods/f6b67d81-437e-4cb6-921d-35ed3d237592/volumes"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.237257 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.238037 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6b67d81-437e-4cb6-921d-35ed3d237592" containerName="git-clone"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.238054 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b67d81-437e-4cb6-921d-35ed3d237592" containerName="git-clone"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.238167 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6b67d81-437e-4cb6-921d-35ed3d237592" containerName="git-clone"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.248169 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.252985 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-wlpf2\""
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.253897 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\""
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.254606 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\""
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.254641 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.254949 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\""
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.271396 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.366763 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.366871 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89sz\" (UniqueName: \"kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.366922 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.366953 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.366995 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367156 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367255 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367295 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367324 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367350 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367437 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367485 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.367578 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469728 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469806 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469854 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469899 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469936 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.469974 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470332 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470476 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p89sz\" (UniqueName: \"kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470340 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470593 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470691 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470728 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470764 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470811 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.470870 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471003 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471115 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471287 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471887 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471982 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.471982 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.472069 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.480583 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.481312 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.484104 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.503458 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p89sz\" (UniqueName: \"kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz\") pod \"service-telemetry-framework-index-4-build\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.587451 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:51 crc kubenswrapper[5129]: I1211 17:07:51.802349 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 17:07:52 crc kubenswrapper[5129]: I1211 17:07:52.719467 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"1102610c-ea81-416f-a9a5-6ca3f22db75a","Type":"ContainerStarted","Data":"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746"}
Dec 11 17:07:52 crc kubenswrapper[5129]: I1211 17:07:52.721365 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"1102610c-ea81-416f-a9a5-6ca3f22db75a","Type":"ContainerStarted","Data":"e39f73856756c6306223ad63f7a77ff8518f6d40f5ced2d6aa88e64750e41833"}
Dec 11 17:07:52 crc kubenswrapper[5129]: I1211 17:07:52.795550 5129 ???:1] "http: TLS handshake error from 192.168.126.11:59434: no serving certificate available for the kubelet"
Dec 11 17:07:53 crc kubenswrapper[5129]: I1211 17:07:53.835150 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 17:07:54 crc kubenswrapper[5129]: I1211 17:07:54.737494 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="1102610c-ea81-416f-a9a5-6ca3f22db75a" containerName="git-clone" containerID="cri-o://9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746" gracePeriod=30
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.184412 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_1102610c-ea81-416f-a9a5-6ca3f22db75a/git-clone/0.log"
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.184794 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330576 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330651 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330678 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330717 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330761 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330816 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330843 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p89sz\" (UniqueName: \"kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330895 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330944 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.330991 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331011 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331018 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331038 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331382 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"1102610c-ea81-416f-a9a5-6ca3f22db75a\" (UID: \"1102610c-ea81-416f-a9a5-6ca3f22db75a\") "
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331624 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331621 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331723 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.332076 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.332105 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.332315 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.332340 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.332354 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.331955 5129 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.336875 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-push") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "builder-dockercfg-wlpf2-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.337125 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull" (OuterVolumeSpecName: "builder-dockercfg-wlpf2-pull") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "builder-dockercfg-wlpf2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.337971 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.338649 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz" (OuterVolumeSpecName: "kube-api-access-p89sz") pod "1102610c-ea81-416f-a9a5-6ca3f22db75a" (UID: "1102610c-ea81-416f-a9a5-6ca3f22db75a"). InnerVolumeSpecName "kube-api-access-p89sz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433793 5129 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433826 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433837 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-pull\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-pull\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433846 5129 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-wlpf2-push\" (UniqueName: \"kubernetes.io/secret/1102610c-ea81-416f-a9a5-6ca3f22db75a-builder-dockercfg-wlpf2-push\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433856 5129 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-blob-cache\") on node \"crc\" 
DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433864 5129 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433919 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p89sz\" (UniqueName: \"kubernetes.io/projected/1102610c-ea81-416f-a9a5-6ca3f22db75a-kube-api-access-p89sz\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433927 5129 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433934 5129 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1102610c-ea81-416f-a9a5-6ca3f22db75a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433945 5129 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1102610c-ea81-416f-a9a5-6ca3f22db75a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433954 5129 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.433964 5129 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1102610c-ea81-416f-a9a5-6ca3f22db75a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 
17:07:55.744743 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_1102610c-ea81-416f-a9a5-6ca3f22db75a/git-clone/0.log" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.744797 5129 generic.go:358] "Generic (PLEG): container finished" podID="1102610c-ea81-416f-a9a5-6ca3f22db75a" containerID="9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746" exitCode=1 Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.744871 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.744959 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"1102610c-ea81-416f-a9a5-6ca3f22db75a","Type":"ContainerDied","Data":"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746"} Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.745013 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"1102610c-ea81-416f-a9a5-6ca3f22db75a","Type":"ContainerDied","Data":"e39f73856756c6306223ad63f7a77ff8518f6d40f5ced2d6aa88e64750e41833"} Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.745034 5129 scope.go:117] "RemoveContainer" containerID="9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.778442 5129 scope.go:117] "RemoveContainer" containerID="9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746" Dec 11 17:07:55 crc kubenswrapper[5129]: E1211 17:07:55.779337 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746\": container with ID starting with 
9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746 not found: ID does not exist" containerID="9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.779370 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746"} err="failed to get container status \"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746\": rpc error: code = NotFound desc = could not find container \"9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746\": container with ID starting with 9b51ceb275779daa4b43aa3300c894fe9532c466a6ea9d7ef2fd386002e12746 not found: ID does not exist" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.786922 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.794325 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.942012 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-mqml2"] Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.942740 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1102610c-ea81-416f-a9a5-6ca3f22db75a" containerName="git-clone" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.942766 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102610c-ea81-416f-a9a5-6ca3f22db75a" containerName="git-clone" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.942897 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="1102610c-ea81-416f-a9a5-6ca3f22db75a" containerName="git-clone" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.959184 5129 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-mqml2" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.964814 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-pxfx9\"" Dec 11 17:07:55 crc kubenswrapper[5129]: I1211 17:07:55.976334 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-mqml2"] Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.143254 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7hqq\" (UniqueName: \"kubernetes.io/projected/90e4f2f1-2390-4d7b-a33b-28cc0714f188-kube-api-access-r7hqq\") pod \"infrawatch-operators-mqml2\" (UID: \"90e4f2f1-2390-4d7b-a33b-28cc0714f188\") " pod="service-telemetry/infrawatch-operators-mqml2" Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.244280 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7hqq\" (UniqueName: \"kubernetes.io/projected/90e4f2f1-2390-4d7b-a33b-28cc0714f188-kube-api-access-r7hqq\") pod \"infrawatch-operators-mqml2\" (UID: \"90e4f2f1-2390-4d7b-a33b-28cc0714f188\") " pod="service-telemetry/infrawatch-operators-mqml2" Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.268595 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7hqq\" (UniqueName: \"kubernetes.io/projected/90e4f2f1-2390-4d7b-a33b-28cc0714f188-kube-api-access-r7hqq\") pod \"infrawatch-operators-mqml2\" (UID: \"90e4f2f1-2390-4d7b-a33b-28cc0714f188\") " pod="service-telemetry/infrawatch-operators-mqml2" Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.297124 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-mqml2" Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.546451 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1102610c-ea81-416f-a9a5-6ca3f22db75a" path="/var/lib/kubelet/pods/1102610c-ea81-416f-a9a5-6ca3f22db75a/volumes" Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.717755 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-mqml2"] Dec 11 17:07:56 crc kubenswrapper[5129]: I1211 17:07:56.754021 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-mqml2" event={"ID":"90e4f2f1-2390-4d7b-a33b-28cc0714f188","Type":"ContainerStarted","Data":"53ff4d9ed483ee0874d1d88bf35f068ca53581616e3dc85b1314815536f75680"} Dec 11 17:07:56 crc kubenswrapper[5129]: E1211 17:07:56.804647 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:07:56 crc kubenswrapper[5129]: E1211 17:07:56.804915 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:07:56 crc kubenswrapper[5129]: E1211 17:07:56.806431 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:07:57 crc kubenswrapper[5129]: E1211 17:07:57.762145 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:08:08 crc kubenswrapper[5129]: E1211 17:08:08.588654 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:08:08 crc kubenswrapper[5129]: E1211 17:08:08.589464 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:08:08 crc kubenswrapper[5129]: E1211 17:08:08.590767 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:08:23 crc kubenswrapper[5129]: E1211 17:08:23.520902 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.659234 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.670298 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.694768 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.773172 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.773284 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.773495 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7929x\" (UniqueName: \"kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.874663 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7929x\" (UniqueName: \"kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.875040 5129 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.875091 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.875570 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.875673 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:32 crc kubenswrapper[5129]: I1211 17:08:32.899649 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7929x\" (UniqueName: \"kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x\") pod \"redhat-operators-jz25x\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:33 crc kubenswrapper[5129]: I1211 17:08:33.002172 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:33 crc kubenswrapper[5129]: I1211 17:08:33.247522 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:34 crc kubenswrapper[5129]: I1211 17:08:34.142744 5129 generic.go:358] "Generic (PLEG): container finished" podID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerID="56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae" exitCode=0 Dec 11 17:08:34 crc kubenswrapper[5129]: I1211 17:08:34.142809 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerDied","Data":"56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae"} Dec 11 17:08:34 crc kubenswrapper[5129]: I1211 17:08:34.143818 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerStarted","Data":"0e3383d8bccd2d76dfdcb32b8636f43e092c2ceae3d83478d0dbc0785d195d69"} Dec 11 17:08:35 crc kubenswrapper[5129]: I1211 17:08:35.160584 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerStarted","Data":"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a"} Dec 11 17:08:36 crc kubenswrapper[5129]: E1211 17:08:36.627603 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image 
source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:08:36 crc kubenswrapper[5129]: E1211 17:08:36.628330 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:08:36 crc kubenswrapper[5129]: E1211 17:08:36.630395 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:08:37 crc kubenswrapper[5129]: I1211 17:08:37.180659 5129 generic.go:358] "Generic (PLEG): container finished" podID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerID="c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a" exitCode=0 Dec 11 17:08:37 crc kubenswrapper[5129]: I1211 17:08:37.180811 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerDied","Data":"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a"} Dec 11 17:08:38 crc kubenswrapper[5129]: I1211 17:08:38.191310 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerStarted","Data":"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed"} Dec 11 17:08:38 crc kubenswrapper[5129]: I1211 17:08:38.223388 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jz25x" podStartSLOduration=5.531788867 podStartE2EDuration="6.223370139s" podCreationTimestamp="2025-12-11 17:08:32 +0000 UTC" firstStartedPulling="2025-12-11 17:08:34.143741649 +0000 UTC m=+857.947271676" lastFinishedPulling="2025-12-11 17:08:34.835322901 +0000 UTC m=+858.638852948" observedRunningTime="2025-12-11 17:08:38.220552932 +0000 UTC m=+862.024082959" 
watchObservedRunningTime="2025-12-11 17:08:38.223370139 +0000 UTC m=+862.026900166" Dec 11 17:08:38 crc kubenswrapper[5129]: I1211 17:08:38.947434 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:08:38 crc kubenswrapper[5129]: I1211 17:08:38.947564 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:08:43 crc kubenswrapper[5129]: I1211 17:08:43.003091 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:43 crc kubenswrapper[5129]: I1211 17:08:43.003582 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:43 crc kubenswrapper[5129]: I1211 17:08:43.941934 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:08:43 crc kubenswrapper[5129]: I1211 17:08:43.957585 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:08:43 crc kubenswrapper[5129]: I1211 17:08:43.957759 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.045444 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.045507 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwkt\" (UniqueName: \"kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.045777 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.062708 5129 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz25x" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="registry-server" probeResult="failure" output=< Dec 11 17:08:44 crc kubenswrapper[5129]: timeout: failed to connect service ":50051" within 1s Dec 11 17:08:44 crc kubenswrapper[5129]: > Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.149164 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.149275 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwkt\" (UniqueName: \"kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.149431 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.150033 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.150295 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.181258 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwkt\" (UniqueName: 
\"kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt\") pod \"certified-operators-djhqn\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.292463 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:44 crc kubenswrapper[5129]: I1211 17:08:44.536804 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:08:45 crc kubenswrapper[5129]: I1211 17:08:45.240723 5129 generic.go:358] "Generic (PLEG): container finished" podID="19942eee-408b-4242-b7ac-d7ce781d751b" containerID="6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190" exitCode=0 Dec 11 17:08:45 crc kubenswrapper[5129]: I1211 17:08:45.240816 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerDied","Data":"6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190"} Dec 11 17:08:45 crc kubenswrapper[5129]: I1211 17:08:45.241317 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerStarted","Data":"0b070b6a2a3afb587e7e88b1823109aa1fec6d54a1b49e2a8d41672fa435eee6"} Dec 11 17:08:46 crc kubenswrapper[5129]: I1211 17:08:46.251769 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerStarted","Data":"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c"} Dec 11 17:08:47 crc kubenswrapper[5129]: I1211 17:08:47.259137 5129 generic.go:358] "Generic (PLEG): container finished" podID="19942eee-408b-4242-b7ac-d7ce781d751b" 
containerID="7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c" exitCode=0 Dec 11 17:08:47 crc kubenswrapper[5129]: I1211 17:08:47.259326 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerDied","Data":"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c"} Dec 11 17:08:48 crc kubenswrapper[5129]: I1211 17:08:48.278022 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerStarted","Data":"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0"} Dec 11 17:08:48 crc kubenswrapper[5129]: I1211 17:08:48.301650 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-djhqn" podStartSLOduration=4.629100662 podStartE2EDuration="5.301635785s" podCreationTimestamp="2025-12-11 17:08:43 +0000 UTC" firstStartedPulling="2025-12-11 17:08:45.242473685 +0000 UTC m=+869.046003742" lastFinishedPulling="2025-12-11 17:08:45.915008808 +0000 UTC m=+869.718538865" observedRunningTime="2025-12-11 17:08:48.299469578 +0000 UTC m=+872.102999615" watchObservedRunningTime="2025-12-11 17:08:48.301635785 +0000 UTC m=+872.105165802" Dec 11 17:08:51 crc kubenswrapper[5129]: E1211 17:08:51.521688 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:08:53 crc kubenswrapper[5129]: I1211 17:08:53.063025 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:53 crc kubenswrapper[5129]: I1211 17:08:53.119888 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:53 crc kubenswrapper[5129]: I1211 17:08:53.309906 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.293849 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.294228 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.324025 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jz25x" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="registry-server" containerID="cri-o://7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed" gracePeriod=2 Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.361798 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.731404 5129 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.800144 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content\") pod \"b8d213a4-c117-4028-9528-294c5cc5d0a2\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.800230 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities\") pod \"b8d213a4-c117-4028-9528-294c5cc5d0a2\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.800362 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7929x\" (UniqueName: \"kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x\") pod \"b8d213a4-c117-4028-9528-294c5cc5d0a2\" (UID: \"b8d213a4-c117-4028-9528-294c5cc5d0a2\") " Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.802060 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities" (OuterVolumeSpecName: "utilities") pod "b8d213a4-c117-4028-9528-294c5cc5d0a2" (UID: "b8d213a4-c117-4028-9528-294c5cc5d0a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.811876 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x" (OuterVolumeSpecName: "kube-api-access-7929x") pod "b8d213a4-c117-4028-9528-294c5cc5d0a2" (UID: "b8d213a4-c117-4028-9528-294c5cc5d0a2"). 
InnerVolumeSpecName "kube-api-access-7929x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.903961 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.904010 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7929x\" (UniqueName: \"kubernetes.io/projected/b8d213a4-c117-4028-9528-294c5cc5d0a2-kube-api-access-7929x\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:54 crc kubenswrapper[5129]: I1211 17:08:54.917099 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8d213a4-c117-4028-9528-294c5cc5d0a2" (UID: "b8d213a4-c117-4028-9528-294c5cc5d0a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.005479 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8d213a4-c117-4028-9528-294c5cc5d0a2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.334394 5129 generic.go:358] "Generic (PLEG): container finished" podID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerID="7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed" exitCode=0 Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.334716 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerDied","Data":"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed"} Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.335057 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz25x" event={"ID":"b8d213a4-c117-4028-9528-294c5cc5d0a2","Type":"ContainerDied","Data":"0e3383d8bccd2d76dfdcb32b8636f43e092c2ceae3d83478d0dbc0785d195d69"} Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.335088 5129 scope.go:117] "RemoveContainer" containerID="7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.334867 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz25x" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.362287 5129 scope.go:117] "RemoveContainer" containerID="c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.386145 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.394968 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jz25x"] Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.409408 5129 scope.go:117] "RemoveContainer" containerID="56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.409712 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.426405 5129 scope.go:117] "RemoveContainer" containerID="7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed" Dec 11 17:08:55 crc kubenswrapper[5129]: E1211 17:08:55.426810 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed\": container with ID starting with 7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed not found: ID does not exist" containerID="7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.426842 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed"} err="failed to get container status \"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed\": rpc error: code = NotFound desc = could not find 
container \"7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed\": container with ID starting with 7577230b411b6b376d79e8996a2eed76bf2d135ea01935c87842dc101144f4ed not found: ID does not exist" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.426867 5129 scope.go:117] "RemoveContainer" containerID="c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a" Dec 11 17:08:55 crc kubenswrapper[5129]: E1211 17:08:55.427230 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a\": container with ID starting with c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a not found: ID does not exist" containerID="c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.427276 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a"} err="failed to get container status \"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a\": rpc error: code = NotFound desc = could not find container \"c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a\": container with ID starting with c9ac51630f13a7487197baf88dcef29c846f57c39a6e2ef337cddc104154c96a not found: ID does not exist" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.427308 5129 scope.go:117] "RemoveContainer" containerID="56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae" Dec 11 17:08:55 crc kubenswrapper[5129]: E1211 17:08:55.427560 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae\": container with ID starting with 56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae not found: ID does 
not exist" containerID="56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae" Dec 11 17:08:55 crc kubenswrapper[5129]: I1211 17:08:55.427584 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae"} err="failed to get container status \"56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae\": rpc error: code = NotFound desc = could not find container \"56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae\": container with ID starting with 56af6fd5863df85431770190e811fa59bbd2f919d2e0fe3f4cf1758a2c922bae not found: ID does not exist" Dec 11 17:08:56 crc kubenswrapper[5129]: I1211 17:08:56.504190 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:08:56 crc kubenswrapper[5129]: I1211 17:08:56.530576 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" path="/var/lib/kubelet/pods/b8d213a4-c117-4028-9528-294c5cc5d0a2/volumes" Dec 11 17:08:57 crc kubenswrapper[5129]: I1211 17:08:57.356121 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-djhqn" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="registry-server" containerID="cri-o://52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0" gracePeriod=2 Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.281352 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.353074 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities\") pod \"19942eee-408b-4242-b7ac-d7ce781d751b\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.353186 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content\") pod \"19942eee-408b-4242-b7ac-d7ce781d751b\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.353230 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xwkt\" (UniqueName: \"kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt\") pod \"19942eee-408b-4242-b7ac-d7ce781d751b\" (UID: \"19942eee-408b-4242-b7ac-d7ce781d751b\") " Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.354999 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities" (OuterVolumeSpecName: "utilities") pod "19942eee-408b-4242-b7ac-d7ce781d751b" (UID: "19942eee-408b-4242-b7ac-d7ce781d751b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.367016 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt" (OuterVolumeSpecName: "kube-api-access-9xwkt") pod "19942eee-408b-4242-b7ac-d7ce781d751b" (UID: "19942eee-408b-4242-b7ac-d7ce781d751b"). InnerVolumeSpecName "kube-api-access-9xwkt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.369156 5129 generic.go:358] "Generic (PLEG): container finished" podID="19942eee-408b-4242-b7ac-d7ce781d751b" containerID="52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0" exitCode=0 Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.369968 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-djhqn" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.369968 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerDied","Data":"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0"} Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.371457 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-djhqn" event={"ID":"19942eee-408b-4242-b7ac-d7ce781d751b","Type":"ContainerDied","Data":"0b070b6a2a3afb587e7e88b1823109aa1fec6d54a1b49e2a8d41672fa435eee6"} Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.371479 5129 scope.go:117] "RemoveContainer" containerID="52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.395136 5129 scope.go:117] "RemoveContainer" containerID="7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.397212 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19942eee-408b-4242-b7ac-d7ce781d751b" (UID: "19942eee-408b-4242-b7ac-d7ce781d751b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.409792 5129 scope.go:117] "RemoveContainer" containerID="6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.428072 5129 scope.go:117] "RemoveContainer" containerID="52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0" Dec 11 17:08:58 crc kubenswrapper[5129]: E1211 17:08:58.428398 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0\": container with ID starting with 52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0 not found: ID does not exist" containerID="52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.428428 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0"} err="failed to get container status \"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0\": rpc error: code = NotFound desc = could not find container \"52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0\": container with ID starting with 52e3ed5097fed7483e7caec563fa0403d727401777db9543fc19ce324771cda0 not found: ID does not exist" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.428447 5129 scope.go:117] "RemoveContainer" containerID="7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c" Dec 11 17:08:58 crc kubenswrapper[5129]: E1211 17:08:58.428655 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c\": container with ID starting with 
7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c not found: ID does not exist" containerID="7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.428676 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c"} err="failed to get container status \"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c\": rpc error: code = NotFound desc = could not find container \"7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c\": container with ID starting with 7aa2c3242e96d716a343bd860b83f545feba90a04287965effb16800c20b576c not found: ID does not exist" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.428687 5129 scope.go:117] "RemoveContainer" containerID="6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190" Dec 11 17:08:58 crc kubenswrapper[5129]: E1211 17:08:58.428961 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190\": container with ID starting with 6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190 not found: ID does not exist" containerID="6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.429002 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190"} err="failed to get container status \"6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190\": rpc error: code = NotFound desc = could not find container \"6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190\": container with ID starting with 6b9e44168f7e4dcaffe9c40e2ed60a856c9283235920dcfe97158567db9c7190 not found: ID does not 
exist" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.454796 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.454824 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19942eee-408b-4242-b7ac-d7ce781d751b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.454836 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xwkt\" (UniqueName: \"kubernetes.io/projected/19942eee-408b-4242-b7ac-d7ce781d751b-kube-api-access-9xwkt\") on node \"crc\" DevicePath \"\"" Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.698244 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:08:58 crc kubenswrapper[5129]: I1211 17:08:58.705744 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-djhqn"] Dec 11 17:09:00 crc kubenswrapper[5129]: I1211 17:09:00.531307 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" path="/var/lib/kubelet/pods/19942eee-408b-4242-b7ac-d7ce781d751b/volumes" Dec 11 17:09:04 crc kubenswrapper[5129]: E1211 17:09:04.521741 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:09:08 crc kubenswrapper[5129]: I1211 17:09:08.947254 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:09:08 crc kubenswrapper[5129]: I1211 17:09:08.947779 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:09:16 crc kubenswrapper[5129]: I1211 17:09:16.844410 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:09:16 crc kubenswrapper[5129]: I1211 17:09:16.851886 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:09:16 crc kubenswrapper[5129]: I1211 17:09:16.856179 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:09:16 crc kubenswrapper[5129]: I1211 17:09:16.862786 5129 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:09:19 crc kubenswrapper[5129]: E1211 17:09:19.571158 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:09:19 crc kubenswrapper[5129]: E1211 17:09:19.571571 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 
17:09:19 crc kubenswrapper[5129]: E1211 17:09:19.572733 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.887281 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.888874 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="extract-utilities" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.888889 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="extract-utilities" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.888900 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.888979 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889051 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="extract-utilities" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889057 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="extract-utilities" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889069 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889074 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889086 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="extract-content" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889093 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="extract-content" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889099 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="extract-content" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889105 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="extract-content" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889193 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="19942eee-408b-4242-b7ac-d7ce781d751b" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.889210 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8d213a4-c117-4028-9528-294c5cc5d0a2" containerName="registry-server" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.900001 5129 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.911485 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.987601 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.987642 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:26 crc kubenswrapper[5129]: I1211 17:09:26.987965 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppfsm\" (UniqueName: \"kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.088838 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ppfsm\" (UniqueName: \"kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 
17:09:27.088899 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.088956 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.089492 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.089652 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.114461 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppfsm\" (UniqueName: \"kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm\") pod \"community-operators-gjh6h\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.227634 5129 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.475813 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:27 crc kubenswrapper[5129]: W1211 17:09:27.479344 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc613cddf_093a_4cc2_b6de_9bff84da401c.slice/crio-11acf1535656a5d220e989d5ffdea2875e7312a845262f3d9eae3c77ce4ea68c WatchSource:0}: Error finding container 11acf1535656a5d220e989d5ffdea2875e7312a845262f3d9eae3c77ce4ea68c: Status 404 returned error can't find the container with id 11acf1535656a5d220e989d5ffdea2875e7312a845262f3d9eae3c77ce4ea68c Dec 11 17:09:27 crc kubenswrapper[5129]: I1211 17:09:27.588915 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerStarted","Data":"11acf1535656a5d220e989d5ffdea2875e7312a845262f3d9eae3c77ce4ea68c"} Dec 11 17:09:28 crc kubenswrapper[5129]: I1211 17:09:28.601662 5129 generic.go:358] "Generic (PLEG): container finished" podID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerID="134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4" exitCode=0 Dec 11 17:09:28 crc kubenswrapper[5129]: I1211 17:09:28.601734 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerDied","Data":"134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4"} Dec 11 17:09:29 crc kubenswrapper[5129]: I1211 17:09:29.610424 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" 
event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerStarted","Data":"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873"} Dec 11 17:09:30 crc kubenswrapper[5129]: I1211 17:09:30.627441 5129 generic.go:358] "Generic (PLEG): container finished" podID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerID="0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873" exitCode=0 Dec 11 17:09:30 crc kubenswrapper[5129]: I1211 17:09:30.627475 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerDied","Data":"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873"} Dec 11 17:09:31 crc kubenswrapper[5129]: I1211 17:09:31.649047 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerStarted","Data":"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db"} Dec 11 17:09:31 crc kubenswrapper[5129]: I1211 17:09:31.681314 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjh6h" podStartSLOduration=4.9912821019999996 podStartE2EDuration="5.681284814s" podCreationTimestamp="2025-12-11 17:09:26 +0000 UTC" firstStartedPulling="2025-12-11 17:09:28.603041963 +0000 UTC m=+912.406572010" lastFinishedPulling="2025-12-11 17:09:29.293044695 +0000 UTC m=+913.096574722" observedRunningTime="2025-12-11 17:09:31.669440056 +0000 UTC m=+915.472970083" watchObservedRunningTime="2025-12-11 17:09:31.681284814 +0000 UTC m=+915.484814871" Dec 11 17:09:33 crc kubenswrapper[5129]: E1211 17:09:33.520835 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:09:37 crc kubenswrapper[5129]: I1211 17:09:37.228142 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:37 crc kubenswrapper[5129]: I1211 17:09:37.228205 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:37 crc kubenswrapper[5129]: I1211 17:09:37.282459 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:37 crc kubenswrapper[5129]: I1211 17:09:37.741318 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:37 crc kubenswrapper[5129]: I1211 17:09:37.794498 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:38 crc kubenswrapper[5129]: I1211 17:09:38.947623 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:09:38 crc kubenswrapper[5129]: I1211 17:09:38.947728 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:09:38 crc kubenswrapper[5129]: I1211 17:09:38.947796 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 17:09:38 crc kubenswrapper[5129]: I1211 17:09:38.948767 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 17:09:38 crc kubenswrapper[5129]: I1211 17:09:38.948885 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed" gracePeriod=600 Dec 11 17:09:39 crc kubenswrapper[5129]: I1211 17:09:39.733806 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed" exitCode=0 Dec 11 17:09:39 crc kubenswrapper[5129]: I1211 17:09:39.734001 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" 
event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed"} Dec 11 17:09:39 crc kubenswrapper[5129]: I1211 17:09:39.734231 5129 scope.go:117] "RemoveContainer" containerID="8a11ce0f7bc15e595347b96471f3f4b914409e097a5439477166064a982bf74b" Dec 11 17:09:39 crc kubenswrapper[5129]: I1211 17:09:39.734459 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjh6h" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="registry-server" containerID="cri-o://993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db" gracePeriod=2 Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.134846 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.265381 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content\") pod \"c613cddf-093a-4cc2-b6de-9bff84da401c\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.265773 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities\") pod \"c613cddf-093a-4cc2-b6de-9bff84da401c\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.265932 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppfsm\" (UniqueName: \"kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm\") pod \"c613cddf-093a-4cc2-b6de-9bff84da401c\" (UID: \"c613cddf-093a-4cc2-b6de-9bff84da401c\") " Dec 11 17:09:40 crc 
kubenswrapper[5129]: I1211 17:09:40.266791 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities" (OuterVolumeSpecName: "utilities") pod "c613cddf-093a-4cc2-b6de-9bff84da401c" (UID: "c613cddf-093a-4cc2-b6de-9bff84da401c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.271051 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm" (OuterVolumeSpecName: "kube-api-access-ppfsm") pod "c613cddf-093a-4cc2-b6de-9bff84da401c" (UID: "c613cddf-093a-4cc2-b6de-9bff84da401c"). InnerVolumeSpecName "kube-api-access-ppfsm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.314828 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c613cddf-093a-4cc2-b6de-9bff84da401c" (UID: "c613cddf-093a-4cc2-b6de-9bff84da401c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.367803 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.367843 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c613cddf-093a-4cc2-b6de-9bff84da401c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.367860 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ppfsm\" (UniqueName: \"kubernetes.io/projected/c613cddf-093a-4cc2-b6de-9bff84da401c-kube-api-access-ppfsm\") on node \"crc\" DevicePath \"\"" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.743834 5129 generic.go:358] "Generic (PLEG): container finished" podID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerID="993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db" exitCode=0 Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.743943 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerDied","Data":"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db"} Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.744006 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjh6h" event={"ID":"c613cddf-093a-4cc2-b6de-9bff84da401c","Type":"ContainerDied","Data":"11acf1535656a5d220e989d5ffdea2875e7312a845262f3d9eae3c77ce4ea68c"} Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.744004 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjh6h" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.744034 5129 scope.go:117] "RemoveContainer" containerID="993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.747705 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74"} Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.779666 5129 scope.go:117] "RemoveContainer" containerID="0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.800982 5129 scope.go:117] "RemoveContainer" containerID="134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.816805 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.820943 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjh6h"] Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.821502 5129 scope.go:117] "RemoveContainer" containerID="993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db" Dec 11 17:09:40 crc kubenswrapper[5129]: E1211 17:09:40.822888 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db\": container with ID starting with 993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db not found: ID does not exist" containerID="993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 
17:09:40.822936 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db"} err="failed to get container status \"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db\": rpc error: code = NotFound desc = could not find container \"993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db\": container with ID starting with 993283ae8d7161b5b400ca1b7c2db409fe6069b552a66834686a3ffcd19682db not found: ID does not exist" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.822959 5129 scope.go:117] "RemoveContainer" containerID="0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873" Dec 11 17:09:40 crc kubenswrapper[5129]: E1211 17:09:40.823239 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873\": container with ID starting with 0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873 not found: ID does not exist" containerID="0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.823262 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873"} err="failed to get container status \"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873\": rpc error: code = NotFound desc = could not find container \"0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873\": container with ID starting with 0aef5beb714a40917296ff6e5ccb62122302855954a3f82ea56e2dcaba46a873 not found: ID does not exist" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.823279 5129 scope.go:117] "RemoveContainer" containerID="134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4" Dec 11 17:09:40 crc 
kubenswrapper[5129]: E1211 17:09:40.823645 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4\": container with ID starting with 134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4 not found: ID does not exist" containerID="134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4" Dec 11 17:09:40 crc kubenswrapper[5129]: I1211 17:09:40.823673 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4"} err="failed to get container status \"134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4\": rpc error: code = NotFound desc = could not find container \"134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4\": container with ID starting with 134e2fee768427d153ebaadc716374cc3f45faf7136947ceb68025e06d07a2b4 not found: ID does not exist" Dec 11 17:09:42 crc kubenswrapper[5129]: I1211 17:09:42.529164 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" path="/var/lib/kubelet/pods/c613cddf-093a-4cc2-b6de-9bff84da401c/volumes" Dec 11 17:09:47 crc kubenswrapper[5129]: E1211 17:09:47.521180 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: 
get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:09:59 crc kubenswrapper[5129]: E1211 17:09:59.521353 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:10:11 crc kubenswrapper[5129]: E1211 17:10:11.521362 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build 
image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:10:25 crc kubenswrapper[5129]: I1211 17:10:25.521061 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 17:10:25 crc kubenswrapper[5129]: E1211 17:10:25.522852 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:10:36 crc kubenswrapper[5129]: E1211 17:10:36.526649 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:10:50 crc kubenswrapper[5129]: E1211 17:10:50.587892 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:10:50 crc kubenswrapper[5129]: E1211 17:10:50.588886 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:10:50 crc kubenswrapper[5129]: E1211 17:10:50.590057 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:10:59 crc kubenswrapper[5129]: E1211 17:10:59.554731 5129 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.585258 5129 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.621999 5129 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.648718 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55362: no serving certificate available for the kubelet" Dec 11 17:11:01 
crc kubenswrapper[5129]: I1211 17:11:01.678149 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55366: no serving certificate available for the kubelet" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.712369 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55380: no serving certificate available for the kubelet" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.756968 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55392: no serving certificate available for the kubelet" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.823289 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55408: no serving certificate available for the kubelet" Dec 11 17:11:01 crc kubenswrapper[5129]: I1211 17:11:01.932825 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55424: no serving certificate available for the kubelet" Dec 11 17:11:02 crc kubenswrapper[5129]: I1211 17:11:02.119727 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55438: no serving certificate available for the kubelet" Dec 11 17:11:02 crc kubenswrapper[5129]: I1211 17:11:02.466419 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55448: no serving certificate available for the kubelet" Dec 11 17:11:03 crc kubenswrapper[5129]: I1211 17:11:03.139041 5129 ???:1] "http: TLS handshake error from 192.168.126.11:55460: no serving certificate available for the kubelet" Dec 11 17:11:04 crc kubenswrapper[5129]: I1211 17:11:04.448271 5129 ???:1] "http: TLS handshake error from 192.168.126.11:36254: no serving certificate available for the kubelet" Dec 11 17:11:04 crc kubenswrapper[5129]: E1211 17:11:04.521719 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image 
err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:11:07 crc kubenswrapper[5129]: I1211 17:11:07.033708 5129 ???:1] "http: TLS handshake error from 192.168.126.11:36260: no serving certificate available for the kubelet" Dec 11 17:11:12 crc kubenswrapper[5129]: I1211 17:11:12.190265 5129 ???:1] "http: TLS handshake error from 192.168.126.11:36274: no serving certificate available for the kubelet" Dec 11 17:11:19 crc kubenswrapper[5129]: E1211 17:11:19.521763 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:11:22 crc kubenswrapper[5129]: I1211 17:11:22.461174 5129 
???:1] "http: TLS handshake error from 192.168.126.11:48026: no serving certificate available for the kubelet" Dec 11 17:11:34 crc kubenswrapper[5129]: E1211 17:11:34.521090 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:11:42 crc kubenswrapper[5129]: I1211 17:11:42.970528 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41930: no serving certificate available for the kubelet" Dec 11 17:11:47 crc kubenswrapper[5129]: E1211 17:11:47.521102 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get 
manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:11:58 crc kubenswrapper[5129]: E1211 17:11:58.520792 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:12:08 crc kubenswrapper[5129]: I1211 17:12:08.946411 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:12:08 crc kubenswrapper[5129]: I1211 17:12:08.947044 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 11 17:12:12 crc kubenswrapper[5129]: E1211 17:12:12.521474 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:12:23 crc kubenswrapper[5129]: I1211 17:12:23.959713 5129 ???:1] "http: TLS handshake error from 192.168.126.11:40992: no serving certificate available for the kubelet" Dec 11 17:12:24 crc kubenswrapper[5129]: E1211 17:12:24.521275 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:12:38 crc kubenswrapper[5129]: E1211 17:12:38.522175 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:12:38 crc kubenswrapper[5129]: I1211 17:12:38.948145 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:12:38 crc kubenswrapper[5129]: I1211 17:12:38.948363 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:12:53 crc kubenswrapper[5129]: 
E1211 17:12:53.521426 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.025770 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-czsmf"] Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027252 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="extract-utilities" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027276 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="extract-utilities" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027319 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="registry-server" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027331 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="registry-server" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 
17:12:59.027365 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="extract-content" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027377 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="extract-content" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.027601 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="c613cddf-093a-4cc2-b6de-9bff84da401c" containerName="registry-server" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.042268 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-czsmf"] Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.042482 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-czsmf" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.183571 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p78l\" (UniqueName: \"kubernetes.io/projected/2b343d47-5ac2-4494-be36-d38785b71e3c-kube-api-access-6p78l\") pod \"infrawatch-operators-czsmf\" (UID: \"2b343d47-5ac2-4494-be36-d38785b71e3c\") " pod="service-telemetry/infrawatch-operators-czsmf" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.285240 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6p78l\" (UniqueName: \"kubernetes.io/projected/2b343d47-5ac2-4494-be36-d38785b71e3c-kube-api-access-6p78l\") pod \"infrawatch-operators-czsmf\" (UID: \"2b343d47-5ac2-4494-be36-d38785b71e3c\") " pod="service-telemetry/infrawatch-operators-czsmf" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.316047 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p78l\" (UniqueName: 
\"kubernetes.io/projected/2b343d47-5ac2-4494-be36-d38785b71e3c-kube-api-access-6p78l\") pod \"infrawatch-operators-czsmf\" (UID: \"2b343d47-5ac2-4494-be36-d38785b71e3c\") " pod="service-telemetry/infrawatch-operators-czsmf" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.374500 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-czsmf" Dec 11 17:12:59 crc kubenswrapper[5129]: I1211 17:12:59.632124 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-czsmf"] Dec 11 17:12:59 crc kubenswrapper[5129]: E1211 17:12:59.694177 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:12:59 crc kubenswrapper[5129]: E1211 17:12:59.694548 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:12:59 crc kubenswrapper[5129]: E1211 17:12:59.695875 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:00 crc kubenswrapper[5129]: I1211 17:13:00.255272 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-czsmf" event={"ID":"2b343d47-5ac2-4494-be36-d38785b71e3c","Type":"ContainerStarted","Data":"fb39c8da881e851702aa06578d9bd9bdbbe117f1719ffaa1d698aaf2f401159b"} Dec 11 17:13:00 crc kubenswrapper[5129]: E1211 17:13:00.256449 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:01 crc kubenswrapper[5129]: E1211 17:13:01.266547 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:07 crc kubenswrapper[5129]: E1211 17:13:07.521645 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:13:08 crc kubenswrapper[5129]: I1211 17:13:08.947275 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:13:08 crc kubenswrapper[5129]: I1211 17:13:08.947361 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:13:08 crc kubenswrapper[5129]: I1211 17:13:08.947444 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" Dec 11 17:13:08 crc kubenswrapper[5129]: I1211 17:13:08.948337 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 17:13:08 crc kubenswrapper[5129]: I1211 
17:13:08.948503 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74" gracePeriod=600 Dec 11 17:13:09 crc kubenswrapper[5129]: I1211 17:13:09.333400 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74" exitCode=0 Dec 11 17:13:09 crc kubenswrapper[5129]: I1211 17:13:09.333548 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74"} Dec 11 17:13:09 crc kubenswrapper[5129]: I1211 17:13:09.334075 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac"} Dec 11 17:13:09 crc kubenswrapper[5129]: I1211 17:13:09.334109 5129 scope.go:117] "RemoveContainer" containerID="72b323fbfa03c76e16a553147e53e05b8f4d9018a8b65ccba3bfb2ee0d9e02ed" Dec 11 17:13:16 crc kubenswrapper[5129]: E1211 17:13:16.588256 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: 
reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:13:16 crc kubenswrapper[5129]: E1211 17:13:16.589120 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:13:16 crc kubenswrapper[5129]: E1211 17:13:16.590544 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:19 crc kubenswrapper[5129]: E1211 17:13:19.521252 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:13:30 crc kubenswrapper[5129]: E1211 17:13:30.529409 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:34 crc kubenswrapper[5129]: E1211 17:13:34.614780 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:13:34 crc kubenswrapper[5129]: E1211 17:13:34.616066 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:13:34 crc kubenswrapper[5129]: E1211 17:13:34.617454 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:13:45 crc kubenswrapper[5129]: E1211 17:13:45.606468 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:13:45 crc 
kubenswrapper[5129]: E1211 17:13:45.607392 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:13:45 crc kubenswrapper[5129]: E1211 17:13:45.608641 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:13:45 crc kubenswrapper[5129]: I1211 17:13:45.916235 5129 ???:1] "http: TLS handshake error from 192.168.126.11:54150: no serving certificate available for the kubelet" Dec 11 17:13:46 crc kubenswrapper[5129]: E1211 17:13:46.533400 5129 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:13:56 crc kubenswrapper[5129]: E1211 17:13:56.535288 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:14:01 crc kubenswrapper[5129]: E1211 17:14:01.522783 5129 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:14:10 crc kubenswrapper[5129]: E1211 17:14:10.522505 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:14:13 crc kubenswrapper[5129]: E1211 17:14:13.527142 5129 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:14:16 crc kubenswrapper[5129]: I1211 17:14:16.914981 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:14:16 crc kubenswrapper[5129]: I1211 17:14:16.927371 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:14:16 crc kubenswrapper[5129]: I1211 17:14:16.934541 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:14:16 crc kubenswrapper[5129]: I1211 17:14:16.942060 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:14:21 crc kubenswrapper[5129]: E1211 17:14:21.522297 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:14:25 crc kubenswrapper[5129]: E1211 17:14:25.521725 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:14:34 crc kubenswrapper[5129]: E1211 17:14:34.617468 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image 
err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:14:34 crc kubenswrapper[5129]: E1211 17:14:34.620458 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 17:14:34 crc kubenswrapper[5129]: E1211 17:14:34.621955 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:14:36 crc kubenswrapper[5129]: E1211 17:14:36.530425 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:14:49 crc kubenswrapper[5129]: E1211 17:14:49.521241 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:14:51 crc kubenswrapper[5129]: E1211 17:14:51.520943 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.176776 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5"] Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.187287 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5"] Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.187401 
5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.190309 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.190596 5129 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.241188 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.241231 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbvm\" (UniqueName: \"kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.241728 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.343856 5129 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.343936 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbvm\" (UniqueName: \"kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.344203 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.345784 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.353908 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.366274 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbvm\" (UniqueName: \"kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm\") pod \"collect-profiles-29424555-p6mv5\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.504354 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:00 crc kubenswrapper[5129]: I1211 17:15:00.712120 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5"] Dec 11 17:15:00 crc kubenswrapper[5129]: W1211 17:15:00.718626 5129 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9330cdc4_30f1_4954_af89_5185812af337.slice/crio-c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37 WatchSource:0}: Error finding container c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37: Status 404 returned error can't find the container with id c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37 Dec 11 17:15:01 crc kubenswrapper[5129]: I1211 17:15:01.269785 5129 generic.go:358] "Generic (PLEG): container finished" podID="9330cdc4-30f1-4954-af89-5185812af337" containerID="6349a840259697177fd70256f038f708730d7154ae7e7ca748ea6dafe5f9c3d1" exitCode=0 Dec 11 17:15:01 crc kubenswrapper[5129]: I1211 17:15:01.269871 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" 
event={"ID":"9330cdc4-30f1-4954-af89-5185812af337","Type":"ContainerDied","Data":"6349a840259697177fd70256f038f708730d7154ae7e7ca748ea6dafe5f9c3d1"} Dec 11 17:15:01 crc kubenswrapper[5129]: I1211 17:15:01.270254 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" event={"ID":"9330cdc4-30f1-4954-af89-5185812af337","Type":"ContainerStarted","Data":"c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37"} Dec 11 17:15:02 crc kubenswrapper[5129]: E1211 17:15:02.520872 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.635350 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.780217 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpbvm\" (UniqueName: \"kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm\") pod \"9330cdc4-30f1-4954-af89-5185812af337\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.780283 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume\") pod \"9330cdc4-30f1-4954-af89-5185812af337\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.780334 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume\") pod \"9330cdc4-30f1-4954-af89-5185812af337\" (UID: \"9330cdc4-30f1-4954-af89-5185812af337\") " Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.781501 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume" (OuterVolumeSpecName: "config-volume") pod "9330cdc4-30f1-4954-af89-5185812af337" (UID: "9330cdc4-30f1-4954-af89-5185812af337"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.786858 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9330cdc4-30f1-4954-af89-5185812af337" (UID: "9330cdc4-30f1-4954-af89-5185812af337"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.787742 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm" (OuterVolumeSpecName: "kube-api-access-rpbvm") pod "9330cdc4-30f1-4954-af89-5185812af337" (UID: "9330cdc4-30f1-4954-af89-5185812af337"). InnerVolumeSpecName "kube-api-access-rpbvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.882970 5129 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9330cdc4-30f1-4954-af89-5185812af337-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.883041 5129 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9330cdc4-30f1-4954-af89-5185812af337-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 17:15:02 crc kubenswrapper[5129]: I1211 17:15:02.883060 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpbvm\" (UniqueName: \"kubernetes.io/projected/9330cdc4-30f1-4954-af89-5185812af337-kube-api-access-rpbvm\") on node \"crc\" DevicePath \"\"" Dec 11 17:15:03 crc kubenswrapper[5129]: I1211 17:15:03.289415 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" Dec 11 17:15:03 crc kubenswrapper[5129]: I1211 17:15:03.289431 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424555-p6mv5" event={"ID":"9330cdc4-30f1-4954-af89-5185812af337","Type":"ContainerDied","Data":"c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37"} Dec 11 17:15:03 crc kubenswrapper[5129]: I1211 17:15:03.289589 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c8a992f08c80123e0c6c1cc82c167282deb85af4b71f5ab03ea9a174756a37" Dec 11 17:15:06 crc kubenswrapper[5129]: E1211 17:15:06.532873 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:15:16 crc kubenswrapper[5129]: E1211 17:15:16.528706 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": 
ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:15:19 crc kubenswrapper[5129]: E1211 17:15:19.521181 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:15:22 crc kubenswrapper[5129]: I1211 17:15:22.197409 5129 ???:1] "http: TLS handshake error from 192.168.126.11:38964: no serving certificate available for the kubelet" Dec 11 17:15:27 crc kubenswrapper[5129]: I1211 17:15:27.520641 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 17:15:27 crc 
kubenswrapper[5129]: E1211 17:15:27.521129 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:15:31 crc kubenswrapper[5129]: E1211 17:15:31.521242 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:15:38 crc kubenswrapper[5129]: 
E1211 17:15:38.532390 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:15:38 crc kubenswrapper[5129]: I1211 17:15:38.946854 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:15:38 crc kubenswrapper[5129]: I1211 17:15:38.946970 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:15:43 crc kubenswrapper[5129]: E1211 17:15:43.520548 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:15:52 crc kubenswrapper[5129]: E1211 17:15:52.521869 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:15:57 crc kubenswrapper[5129]: E1211 17:15:57.521704 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:16:07 crc kubenswrapper[5129]: E1211 17:16:07.592101 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 17:16:07 crc kubenswrapper[5129]: E1211 17:16:07.592983 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 11 17:16:07 crc kubenswrapper[5129]: E1211 17:16:07.594616 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:16:08 crc kubenswrapper[5129]: E1211 17:16:08.521782 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:16:08 crc kubenswrapper[5129]: I1211 17:16:08.946550 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 17:16:08 crc kubenswrapper[5129]: I1211 17:16:08.946659 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 17:16:20 crc kubenswrapper[5129]: E1211 17:16:20.521443 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:16:23 crc kubenswrapper[5129]: E1211 17:16:23.521068 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:16:29 crc kubenswrapper[5129]: I1211 17:16:29.792828 5129 ???:1] "http: TLS handshake error from 192.168.126.11:36128: no serving certificate available for the kubelet"
Dec 11 17:16:34 crc kubenswrapper[5129]: E1211 17:16:34.521709 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:16:35 crc kubenswrapper[5129]: E1211 17:16:35.521712 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:16:38 crc kubenswrapper[5129]: I1211 17:16:38.946592 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 17:16:38 crc kubenswrapper[5129]: I1211 17:16:38.947177 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 17:16:38 crc kubenswrapper[5129]: I1211 17:16:38.947262 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 17:16:38 crc kubenswrapper[5129]: I1211 17:16:38.948109 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 17:16:38 crc kubenswrapper[5129]: I1211 17:16:38.948195 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac" gracePeriod=600
Dec 11 17:16:40 crc kubenswrapper[5129]: I1211 17:16:40.071971 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac" exitCode=0
Dec 11 17:16:40 crc kubenswrapper[5129]: I1211 17:16:40.073138 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac"}
Dec 11 17:16:40 crc kubenswrapper[5129]: I1211 17:16:40.073192 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"391518dc14e957db0b99c0a16953e2a3465deb35e263b141eee1a5696019e05b"}
Dec 11 17:16:40 crc kubenswrapper[5129]: I1211 17:16:40.073228 5129 scope.go:117] "RemoveContainer" containerID="0cb32a03ffea69f560c4ccb49e2319b8060c83c8e33ec5ad314be8da4ad86b74"
Dec 11 17:16:47 crc kubenswrapper[5129]: E1211 17:16:47.520900 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:16:48 crc kubenswrapper[5129]: E1211 17:16:48.531755 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:16:59 crc kubenswrapper[5129]: E1211 17:16:59.522395 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:17:02 crc kubenswrapper[5129]: E1211 17:17:02.521785 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:17:11 crc kubenswrapper[5129]: E1211 17:17:11.520634 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:17:13 crc kubenswrapper[5129]: E1211 17:17:13.521159 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:17:24 crc kubenswrapper[5129]: E1211 17:17:24.521018 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:17:25 crc kubenswrapper[5129]: E1211 17:17:25.521625 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:17:38 crc kubenswrapper[5129]: E1211 17:17:38.522211 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:17:39 crc kubenswrapper[5129]: E1211 17:17:39.521491 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:17:53 crc kubenswrapper[5129]: E1211 17:17:53.521419 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:17:53 crc kubenswrapper[5129]: E1211 17:17:53.521698 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:18:06 crc kubenswrapper[5129]: E1211 17:18:06.528032 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:18:08 crc kubenswrapper[5129]: E1211 17:18:08.522183 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:18:19 crc kubenswrapper[5129]: E1211 17:18:19.525008 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:18:21 crc kubenswrapper[5129]: E1211 17:18:21.521631 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:18:31 crc kubenswrapper[5129]: E1211 17:18:31.522704 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:18:35 crc kubenswrapper[5129]: E1211 17:18:35.607773 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 11 17:18:35 crc kubenswrapper[5129]: E1211 17:18:35.608037 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7hqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-mqml2_service-telemetry(90e4f2f1-2390-4d7b-a33b-28cc0714f188): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 11 17:18:35 crc kubenswrapper[5129]: E1211 17:18:35.609340 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:18:44 crc kubenswrapper[5129]: E1211 17:18:44.520831 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:18:48 crc kubenswrapper[5129]: E1211 17:18:48.522985 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:18:57 crc kubenswrapper[5129]: E1211 17:18:57.603846 5129 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 11 17:18:57 crc kubenswrapper[5129]: E1211 17:18:57.604741 5129 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-czsmf_service-telemetry(2b343d47-5ac2-4494-be36-d38785b71e3c): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 11 17:18:57 crc kubenswrapper[5129]: E1211 17:18:57.606102 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.372994 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-68q8g/must-gather-5ntlt"]
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.374011 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9330cdc4-30f1-4954-af89-5185812af337" containerName="collect-profiles"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.374035 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="9330cdc4-30f1-4954-af89-5185812af337" containerName="collect-profiles"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.374195 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="9330cdc4-30f1-4954-af89-5185812af337" containerName="collect-profiles"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.386487 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-68q8g/must-gather-5ntlt"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.390645 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-68q8g\"/\"openshift-service-ca.crt\""
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.392595 5129 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-68q8g\"/\"kube-root-ca.crt\""
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.398004 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-68q8g/must-gather-5ntlt"]
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.537160 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7ps\" (UniqueName: \"kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.537418 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt"
Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.638761 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7ps\" (UniqueName: \"kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt"
Dec 11 17:18:59 crc kubenswrapper[5129]:
I1211 17:18:59.638810 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.639277 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.655313 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7ps\" (UniqueName: \"kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps\") pod \"must-gather-5ntlt\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.704643 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:18:59 crc kubenswrapper[5129]: I1211 17:18:59.886501 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-68q8g/must-gather-5ntlt"] Dec 11 17:19:00 crc kubenswrapper[5129]: I1211 17:19:00.303832 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-68q8g/must-gather-5ntlt" event={"ID":"3486d756-b2a2-474b-90b8-c521e601778f","Type":"ContainerStarted","Data":"8e6f99b74e0eea3151adf936c3c1f1f27dab678fbd2d314cfb1b837b4e5de598"} Dec 11 17:19:01 crc kubenswrapper[5129]: E1211 17:19:01.521823 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:19:06 crc kubenswrapper[5129]: I1211 17:19:06.344590 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-68q8g/must-gather-5ntlt" event={"ID":"3486d756-b2a2-474b-90b8-c521e601778f","Type":"ContainerStarted","Data":"997f77b0f0b599b1750e3d9f0b94d8e9274078ee84d6d8282707882542c9c3c1"} Dec 11 17:19:06 crc kubenswrapper[5129]: I1211 17:19:06.345401 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-68q8g/must-gather-5ntlt" event={"ID":"3486d756-b2a2-474b-90b8-c521e601778f","Type":"ContainerStarted","Data":"f15f5468baf7357533444ea4138e235cebb30a0c5ad856f8b36c588fefb80fbd"} Dec 11 17:19:06 crc kubenswrapper[5129]: I1211 17:19:06.368031 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-68q8g/must-gather-5ntlt" podStartSLOduration=1.89161855 podStartE2EDuration="7.368005928s" podCreationTimestamp="2025-12-11 17:18:59 +0000 UTC" firstStartedPulling="2025-12-11 17:18:59.904987014 +0000 UTC m=+1483.708517031" lastFinishedPulling="2025-12-11 17:19:05.381374342 +0000 UTC m=+1489.184904409" observedRunningTime="2025-12-11 17:19:06.366279385 +0000 UTC m=+1490.169809432" watchObservedRunningTime="2025-12-11 17:19:06.368005928 +0000 UTC m=+1490.171535975" Dec 11 17:19:08 crc kubenswrapper[5129]: I1211 17:19:08.947081 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:19:08 crc kubenswrapper[5129]: I1211 17:19:08.947484 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:19:09 crc kubenswrapper[5129]: I1211 17:19:09.386754 5129 ???:1] "http: TLS handshake error from 192.168.126.11:45476: no serving certificate available for the kubelet" Dec 11 17:19:10 crc kubenswrapper[5129]: E1211 17:19:10.521953 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:19:12 crc kubenswrapper[5129]: E1211 17:19:12.521065 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:19:17 crc kubenswrapper[5129]: I1211 17:19:17.002035 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:19:17 crc 
kubenswrapper[5129]: I1211 17:19:17.021552 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:19:17 crc kubenswrapper[5129]: I1211 17:19:17.025345 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m95zr_5313889a-2681-4f68-96f8-d5dfea8d3a8b/kube-multus/0.log" Dec 11 17:19:17 crc kubenswrapper[5129]: I1211 17:19:17.033643 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 17:19:23 crc kubenswrapper[5129]: E1211 17:19:23.521628 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:19:25 crc kubenswrapper[5129]: E1211 17:19:25.521428 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:19:36 crc kubenswrapper[5129]: E1211 17:19:36.525458 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:19:36 crc kubenswrapper[5129]: E1211 17:19:36.526264 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:19:38 crc kubenswrapper[5129]: I1211 17:19:38.946475 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 17:19:38 crc kubenswrapper[5129]: I1211 17:19:38.946878 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.240233 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lnclz"] Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.269938 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.285100 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnclz"] Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.379186 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.379329 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.379358 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmvw\" (UniqueName: \"kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.481135 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.481184 5129 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qpmvw\" (UniqueName: \"kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.481248 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.482129 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.482189 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.505489 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmvw\" (UniqueName: \"kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw\") pod \"certified-operators-lnclz\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:46 crc kubenswrapper[5129]: I1211 17:19:46.642322 5129 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:47 crc kubenswrapper[5129]: I1211 17:19:47.137611 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lnclz"] Dec 11 17:19:47 crc kubenswrapper[5129]: I1211 17:19:47.721821 5129 generic.go:358] "Generic (PLEG): container finished" podID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerID="887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23" exitCode=0 Dec 11 17:19:47 crc kubenswrapper[5129]: I1211 17:19:47.721911 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerDied","Data":"887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23"} Dec 11 17:19:47 crc kubenswrapper[5129]: I1211 17:19:47.723128 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerStarted","Data":"350bdc1aa459a1a1a925dc71d640bf728a7d3bbf1395c5f1fe90489094267383"} Dec 11 17:19:48 crc kubenswrapper[5129]: I1211 17:19:48.972355 5129 ???:1] "http: TLS handshake error from 192.168.126.11:56398: no serving certificate available for the kubelet" Dec 11 17:19:49 crc kubenswrapper[5129]: I1211 17:19:49.114972 5129 ???:1] "http: TLS handshake error from 192.168.126.11:56410: no serving certificate available for the kubelet" Dec 11 17:19:49 crc kubenswrapper[5129]: I1211 17:19:49.148276 5129 ???:1] "http: TLS handshake error from 192.168.126.11:56420: no serving certificate available for the kubelet" Dec 11 17:19:49 crc kubenswrapper[5129]: I1211 17:19:49.737174 5129 generic.go:358] "Generic (PLEG): container finished" podID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerID="4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79" exitCode=0 Dec 11 17:19:49 crc kubenswrapper[5129]: I1211 
17:19:49.737325 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerDied","Data":"4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79"} Dec 11 17:19:50 crc kubenswrapper[5129]: E1211 17:19:50.521439 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:19:50 crc kubenswrapper[5129]: I1211 17:19:50.745934 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerStarted","Data":"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"} Dec 11 17:19:50 crc kubenswrapper[5129]: I1211 17:19:50.780232 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lnclz" podStartSLOduration=3.7812347969999998 podStartE2EDuration="4.780205753s" podCreationTimestamp="2025-12-11 17:19:46 +0000 UTC" firstStartedPulling="2025-12-11 17:19:47.723163241 +0000 UTC m=+1531.526693288" 
lastFinishedPulling="2025-12-11 17:19:48.722134227 +0000 UTC m=+1532.525664244" observedRunningTime="2025-12-11 17:19:50.771293612 +0000 UTC m=+1534.574823629" watchObservedRunningTime="2025-12-11 17:19:50.780205753 +0000 UTC m=+1534.583735810" Dec 11 17:19:51 crc kubenswrapper[5129]: E1211 17:19:51.521848 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:19:56 crc kubenswrapper[5129]: I1211 17:19:56.644055 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:56 crc kubenswrapper[5129]: I1211 17:19:56.646442 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:56 crc kubenswrapper[5129]: I1211 17:19:56.690199 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:56 crc kubenswrapper[5129]: I1211 17:19:56.851742 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:56 crc kubenswrapper[5129]: I1211 17:19:56.924562 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnclz"] Dec 11 17:19:58 crc kubenswrapper[5129]: I1211 17:19:58.803694 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lnclz" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="registry-server" containerID="cri-o://2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396" gracePeriod=2 Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.706589 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnclz" Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.798412 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content\") pod \"57177c62-6136-4dd1-a47e-33fd28365cf9\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.798486 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities\") pod \"57177c62-6136-4dd1-a47e-33fd28365cf9\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.798547 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmvw\" (UniqueName: \"kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw\") pod \"57177c62-6136-4dd1-a47e-33fd28365cf9\" (UID: \"57177c62-6136-4dd1-a47e-33fd28365cf9\") " Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.799645 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities" (OuterVolumeSpecName: "utilities") pod "57177c62-6136-4dd1-a47e-33fd28365cf9" (UID: "57177c62-6136-4dd1-a47e-33fd28365cf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.807347 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw" (OuterVolumeSpecName: "kube-api-access-qpmvw") pod "57177c62-6136-4dd1-a47e-33fd28365cf9" (UID: "57177c62-6136-4dd1-a47e-33fd28365cf9"). InnerVolumeSpecName "kube-api-access-qpmvw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.810816 5129 generic.go:358] "Generic (PLEG): container finished" podID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerID="2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396" exitCode=0
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.810949 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerDied","Data":"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"}
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.810993 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lnclz" event={"ID":"57177c62-6136-4dd1-a47e-33fd28365cf9","Type":"ContainerDied","Data":"350bdc1aa459a1a1a925dc71d640bf728a7d3bbf1395c5f1fe90489094267383"}
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.811022 5129 scope.go:117] "RemoveContainer" containerID="2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.811257 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lnclz"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.827883 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57177c62-6136-4dd1-a47e-33fd28365cf9" (UID: "57177c62-6136-4dd1-a47e-33fd28365cf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.836743 5129 scope.go:117] "RemoveContainer" containerID="4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.855665 5129 scope.go:117] "RemoveContainer" containerID="887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.883228 5129 scope.go:117] "RemoveContainer" containerID="2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"
Dec 11 17:19:59 crc kubenswrapper[5129]: E1211 17:19:59.883782 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396\": container with ID starting with 2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396 not found: ID does not exist" containerID="2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.883833 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396"} err="failed to get container status \"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396\": rpc error: code = NotFound desc = could not find container \"2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396\": container with ID starting with 2d6d9fae1dc998e824434780e7b5bc376f9faf31bb31df6f609b2a9a86a3a396 not found: ID does not exist"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.883865 5129 scope.go:117] "RemoveContainer" containerID="4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79"
Dec 11 17:19:59 crc kubenswrapper[5129]: E1211 17:19:59.884169 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79\": container with ID starting with 4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79 not found: ID does not exist" containerID="4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.884211 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79"} err="failed to get container status \"4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79\": rpc error: code = NotFound desc = could not find container \"4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79\": container with ID starting with 4c6a6aa415a022fe34439aba855677cdd0802bef5207e0c4da247157f2981e79 not found: ID does not exist"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.884230 5129 scope.go:117] "RemoveContainer" containerID="887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23"
Dec 11 17:19:59 crc kubenswrapper[5129]: E1211 17:19:59.884690 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23\": container with ID starting with 887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23 not found: ID does not exist" containerID="887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.884725 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23"} err="failed to get container status \"887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23\": rpc error: code = NotFound desc = could not find container \"887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23\": container with ID starting with 887ac3ae4c249735724d6d4a5f5e61c422d82a219360d6f9be03ef7dfe406b23 not found: ID does not exist"
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.899753 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.899789 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57177c62-6136-4dd1-a47e-33fd28365cf9-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 17:19:59 crc kubenswrapper[5129]: I1211 17:19:59.899801 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qpmvw\" (UniqueName: \"kubernetes.io/projected/57177c62-6136-4dd1-a47e-33fd28365cf9-kube-api-access-qpmvw\") on node \"crc\" DevicePath \"\""
Dec 11 17:20:00 crc kubenswrapper[5129]: I1211 17:20:00.154527 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lnclz"]
Dec 11 17:20:00 crc kubenswrapper[5129]: I1211 17:20:00.166606 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lnclz"]
Dec 11 17:20:00 crc kubenswrapper[5129]: I1211 17:20:00.529454 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" path="/var/lib/kubelet/pods/57177c62-6136-4dd1-a47e-33fd28365cf9/volumes"
Dec 11 17:20:01 crc kubenswrapper[5129]: E1211 17:20:01.520422 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:20:01 crc kubenswrapper[5129]: I1211 17:20:01.543984 5129 ???:1] "http: TLS handshake error from 192.168.126.11:52610: no serving certificate available for the kubelet"
Dec 11 17:20:01 crc kubenswrapper[5129]: I1211 17:20:01.592308 5129 ???:1] "http: TLS handshake error from 192.168.126.11:52614: no serving certificate available for the kubelet"
Dec 11 17:20:01 crc kubenswrapper[5129]: I1211 17:20:01.664297 5129 ???:1] "http: TLS handshake error from 192.168.126.11:52626: no serving certificate available for the kubelet"
Dec 11 17:20:04 crc kubenswrapper[5129]: E1211 17:20:04.521708 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.357970 5129 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.358974 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="extract-utilities"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.358986 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="extract-utilities"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.359013 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="registry-server"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.359021 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="registry-server"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.359036 5129 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="extract-content"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.359041 5129 state_mem.go:107] "Deleted CPUSet assignment" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="extract-content"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.359147 5129 memory_manager.go:356] "RemoveStaleState removing state" podUID="57177c62-6136-4dd1-a47e-33fd28365cf9" containerName="registry-server"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.377063 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.377207 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.501578 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pmz\" (UniqueName: \"kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.501626 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.501652 5129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.603120 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.603263 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98pmz\" (UniqueName: \"kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.603308 5129 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.603672 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.603819 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.626744 5129 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-98pmz\" (UniqueName: \"kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz\") pod \"community-operators-28vp8\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") " pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:07 crc kubenswrapper[5129]: I1211 17:20:07.691705 5129 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.181015 5129 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.889928 5129 generic.go:358] "Generic (PLEG): container finished" podID="b72fa7e3-33c5-4b51-8c10-844d38f879db" containerID="5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19" exitCode=0
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.889980 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerDied","Data":"5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19"}
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.890626 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerStarted","Data":"9c8c20b0f9b538538694432244f2fc20e334ef73fff0ea8d593467864a43a4e7"}
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.946654 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.946711 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.946749 5129 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq"
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.947298 5129 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"391518dc14e957db0b99c0a16953e2a3465deb35e263b141eee1a5696019e05b"} pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 17:20:08 crc kubenswrapper[5129]: I1211 17:20:08.947348 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" containerID="cri-o://391518dc14e957db0b99c0a16953e2a3465deb35e263b141eee1a5696019e05b" gracePeriod=600
Dec 11 17:20:09 crc kubenswrapper[5129]: I1211 17:20:09.898753 5129 generic.go:358] "Generic (PLEG): container finished" podID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerID="391518dc14e957db0b99c0a16953e2a3465deb35e263b141eee1a5696019e05b" exitCode=0
Dec 11 17:20:09 crc kubenswrapper[5129]: I1211 17:20:09.898823 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerDied","Data":"391518dc14e957db0b99c0a16953e2a3465deb35e263b141eee1a5696019e05b"}
Dec 11 17:20:09 crc kubenswrapper[5129]: I1211 17:20:09.899349 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" event={"ID":"b9f3b447-4c51-44f3-9ade-21b54c3a6daf","Type":"ContainerStarted","Data":"9ab77204f036681b6de319a20544a5a510d7f4ea0390fb110d8e44dcfa399d2f"}
Dec 11 17:20:09 crc kubenswrapper[5129]: I1211 17:20:09.899375 5129 scope.go:117] "RemoveContainer" containerID="db27e3a7027268dad4d07e93fd4d51a93131fc2145b917dd01f992b53826c7ac"
Dec 11 17:20:10 crc kubenswrapper[5129]: I1211 17:20:10.909099 5129 generic.go:358] "Generic (PLEG): container finished" podID="b72fa7e3-33c5-4b51-8c10-844d38f879db" containerID="7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2" exitCode=0
Dec 11 17:20:10 crc kubenswrapper[5129]: I1211 17:20:10.909627 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerDied","Data":"7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2"}
Dec 11 17:20:11 crc kubenswrapper[5129]: I1211 17:20:11.921902 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerStarted","Data":"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"}
Dec 11 17:20:11 crc kubenswrapper[5129]: I1211 17:20:11.945687 5129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-28vp8" podStartSLOduration=4.048428364 podStartE2EDuration="4.945665767s" podCreationTimestamp="2025-12-11 17:20:07 +0000 UTC" firstStartedPulling="2025-12-11 17:20:08.891094863 +0000 UTC m=+1552.694624890" lastFinishedPulling="2025-12-11 17:20:09.788332276 +0000 UTC m=+1553.591862293" observedRunningTime="2025-12-11 17:20:11.944435198 +0000 UTC m=+1555.747965225" watchObservedRunningTime="2025-12-11 17:20:11.945665767 +0000 UTC m=+1555.749195804"
Dec 11 17:20:12 crc kubenswrapper[5129]: E1211 17:20:12.520362 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:20:17 crc kubenswrapper[5129]: E1211 17:20:17.520613 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.677571 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41962: no serving certificate available for the kubelet"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.691843 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.692687 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.733065 5129 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.809702 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41968: no serving certificate available for the kubelet"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.811621 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41976: no serving certificate available for the kubelet"
Dec 11 17:20:17 crc kubenswrapper[5129]: I1211 17:20:17.834525 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41982: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.017418 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41986: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.021971 5129 ???:1] "http: TLS handshake error from 192.168.126.11:41990: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.033406 5129 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.045862 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42004: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.071956 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.218838 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42016: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.363961 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42022: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.371743 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42030: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.385630 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42032: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.554312 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42046: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.564597 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42060: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.596910 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42062: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.748789 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42078: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.873440 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42080: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.909578 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42088: no serving certificate available for the kubelet"
Dec 11 17:20:18 crc kubenswrapper[5129]: I1211 17:20:18.910974 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42100: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.070154 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42110: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.091463 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42124: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.107244 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42130: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.246048 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42144: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.393191 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42160: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.414065 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42168: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.424047 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42180: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.573054 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42196: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.578080 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42198: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.602874 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42202: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.708590 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42210: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.881820 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42224: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.899720 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42238: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.918485 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42250: no serving certificate available for the kubelet"
Dec 11 17:20:19 crc kubenswrapper[5129]: I1211 17:20:19.974619 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-28vp8" podUID="b72fa7e3-33c5-4b51-8c10-844d38f879db" containerName="registry-server" containerID="cri-o://3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15" gracePeriod=2
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.061186 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42252: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.087221 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42266: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.095253 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42274: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.142307 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42280: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.344690 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42292: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.352724 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.354822 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42302: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.362920 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42310: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.479500 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content\") pod \"b72fa7e3-33c5-4b51-8c10-844d38f879db\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") "
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.479674 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities\") pod \"b72fa7e3-33c5-4b51-8c10-844d38f879db\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") "
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.479721 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98pmz\" (UniqueName: \"kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz\") pod \"b72fa7e3-33c5-4b51-8c10-844d38f879db\" (UID: \"b72fa7e3-33c5-4b51-8c10-844d38f879db\") "
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.480599 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities" (OuterVolumeSpecName: "utilities") pod "b72fa7e3-33c5-4b51-8c10-844d38f879db" (UID: "b72fa7e3-33c5-4b51-8c10-844d38f879db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.487169 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42314: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.487705 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz" (OuterVolumeSpecName: "kube-api-access-98pmz") pod "b72fa7e3-33c5-4b51-8c10-844d38f879db" (UID: "b72fa7e3-33c5-4b51-8c10-844d38f879db"). InnerVolumeSpecName "kube-api-access-98pmz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.529396 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42326: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.534135 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42342: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.540192 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b72fa7e3-33c5-4b51-8c10-844d38f879db" (UID: "b72fa7e3-33c5-4b51-8c10-844d38f879db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.552879 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42358: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.583424 5129 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.583467 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98pmz\" (UniqueName: \"kubernetes.io/projected/b72fa7e3-33c5-4b51-8c10-844d38f879db-kube-api-access-98pmz\") on node \"crc\" DevicePath \"\""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.583481 5129 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72fa7e3-33c5-4b51-8c10-844d38f879db-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.681830 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42366: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.812615 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42368: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.832473 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42372: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.838878 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42380: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.974471 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42388: no serving certificate available for the kubelet"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.982323 5129 generic.go:358] "Generic (PLEG): container finished" podID="b72fa7e3-33c5-4b51-8c10-844d38f879db" containerID="3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15" exitCode=0
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.982362 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerDied","Data":"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"}
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.982425 5129 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28vp8"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.982447 5129 scope.go:117] "RemoveContainer" containerID="3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.982433 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28vp8" event={"ID":"b72fa7e3-33c5-4b51-8c10-844d38f879db","Type":"ContainerDied","Data":"9c8c20b0f9b538538694432244f2fc20e334ef73fff0ea8d593467864a43a4e7"}
Dec 11 17:20:20 crc kubenswrapper[5129]: I1211 17:20:20.996167 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42402: no serving certificate available for the kubelet"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.001768 5129 scope.go:117] "RemoveContainer" containerID="7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.013797 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.026006 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-28vp8"]
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.027547 5129 ???:1] "http: TLS handshake error from 192.168.126.11:42408: no serving certificate available for the kubelet"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.038522 5129 scope.go:117] "RemoveContainer" containerID="5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.054730 5129 scope.go:117] "RemoveContainer" containerID="3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"
Dec 11 17:20:21 crc kubenswrapper[5129]: E1211 17:20:21.055407 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15\": container with ID starting with 3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15 not found: ID does not exist" containerID="3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.055491 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15"} err="failed to get container status \"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15\": rpc error: code = NotFound desc = could not find container \"3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15\": container with ID starting with 3d3bc417ad59a1160afc4c41884257526846a57aee5f9b1931b78e3793ebbe15 not found: ID does not exist"
Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.055574 5129 scope.go:117] "RemoveContainer" containerID="7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2"
Dec 11 17:20:21 crc kubenswrapper[5129]: E1211 17:20:21.056124 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2\": container with ID starting
with 7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2 not found: ID does not exist" containerID="7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2" Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.056151 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2"} err="failed to get container status \"7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2\": rpc error: code = NotFound desc = could not find container \"7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2\": container with ID starting with 7a31c954966f783a6e9dd5449dff0d0bc92e16d54cf2d3eb7334ebc835f2c1c2 not found: ID does not exist" Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.056171 5129 scope.go:117] "RemoveContainer" containerID="5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19" Dec 11 17:20:21 crc kubenswrapper[5129]: E1211 17:20:21.057635 5129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19\": container with ID starting with 5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19 not found: ID does not exist" containerID="5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19" Dec 11 17:20:21 crc kubenswrapper[5129]: I1211 17:20:21.057662 5129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19"} err="failed to get container status \"5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19\": rpc error: code = NotFound desc = could not find container \"5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19\": container with ID starting with 5ee44ee30fa29b232b30ac2ddd35ebbb9b6859ddbabac9021cedff08806b0c19 not found: ID does 
not exist" Dec 11 17:20:22 crc kubenswrapper[5129]: I1211 17:20:22.529140 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72fa7e3-33c5-4b51-8c10-844d38f879db" path="/var/lib/kubelet/pods/b72fa7e3-33c5-4b51-8c10-844d38f879db/volumes" Dec 11 17:20:26 crc kubenswrapper[5129]: E1211 17:20:26.534005 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:20:28 crc kubenswrapper[5129]: I1211 17:20:28.521126 5129 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 17:20:28 crc kubenswrapper[5129]: E1211 17:20:28.521852 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:20:33 crc kubenswrapper[5129]: I1211 17:20:33.333319 5129 ???:1] "http: TLS handshake error from 192.168.126.11:59726: no serving certificate available for the kubelet" Dec 11 17:20:33 crc kubenswrapper[5129]: I1211 17:20:33.472925 5129 ???:1] "http: TLS handshake error from 192.168.126.11:59732: no serving certificate available for the kubelet" Dec 11 17:20:33 crc kubenswrapper[5129]: I1211 17:20:33.521068 5129 ???:1] "http: TLS handshake error from 192.168.126.11:59744: no serving certificate available for the kubelet" Dec 11 17:20:33 crc kubenswrapper[5129]: I1211 17:20:33.630068 5129 ???:1] "http: TLS handshake error from 192.168.126.11:44402: no serving certificate available for the kubelet" Dec 11 17:20:33 crc kubenswrapper[5129]: I1211 17:20:33.682891 5129 ???:1] "http: TLS handshake error from 192.168.126.11:44404: no serving certificate available for the kubelet" Dec 11 17:20:41 crc kubenswrapper[5129]: E1211 17:20:41.521893 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:20:42 crc kubenswrapper[5129]: E1211 17:20:42.525559 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:20:53 crc kubenswrapper[5129]: E1211 17:20:53.520802 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact 
err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:20:54 crc kubenswrapper[5129]: E1211 17:20:54.536183 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:21:08 crc kubenswrapper[5129]: I1211 17:21:08.364736 5129 generic.go:358] "Generic (PLEG): container finished" podID="3486d756-b2a2-474b-90b8-c521e601778f" containerID="f15f5468baf7357533444ea4138e235cebb30a0c5ad856f8b36c588fefb80fbd" exitCode=0 Dec 11 17:21:08 crc kubenswrapper[5129]: I1211 17:21:08.364793 5129 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-68q8g/must-gather-5ntlt" event={"ID":"3486d756-b2a2-474b-90b8-c521e601778f","Type":"ContainerDied","Data":"f15f5468baf7357533444ea4138e235cebb30a0c5ad856f8b36c588fefb80fbd"} Dec 11 17:21:08 crc kubenswrapper[5129]: I1211 17:21:08.366348 5129 scope.go:117] "RemoveContainer" 
containerID="f15f5468baf7357533444ea4138e235cebb30a0c5ad856f8b36c588fefb80fbd" Dec 11 17:21:08 crc kubenswrapper[5129]: E1211 17:21:08.521226 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.405828 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60322: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: E1211 17:21:09.521137 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.573944 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60334: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.589340 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60336: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.626123 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60338: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.639958 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60348: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.657632 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60350: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.672276 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60364: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.689469 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60380: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.704183 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60392: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.875733 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60406: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.891671 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60416: no serving 
certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.922015 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60430: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.937138 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60444: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.954899 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60448: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.970201 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60464: no serving certificate available for the kubelet" Dec 11 17:21:09 crc kubenswrapper[5129]: I1211 17:21:09.990170 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60474: no serving certificate available for the kubelet" Dec 11 17:21:10 crc kubenswrapper[5129]: I1211 17:21:10.005600 5129 ???:1] "http: TLS handshake error from 192.168.126.11:60480: no serving certificate available for the kubelet" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.048133 5129 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-68q8g/must-gather-5ntlt"] Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.049123 5129 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-68q8g/must-gather-5ntlt" podUID="3486d756-b2a2-474b-90b8-c521e601778f" containerName="copy" containerID="cri-o://997f77b0f0b599b1750e3d9f0b94d8e9274078ee84d6d8282707882542c9c3c1" gracePeriod=2 Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.057850 5129 status_manager.go:895] "Failed to get status for pod" podUID="3486d756-b2a2-474b-90b8-c521e601778f" pod="openshift-must-gather-68q8g/must-gather-5ntlt" err="pods \"must-gather-5ntlt\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace 
\"openshift-must-gather-68q8g\": no relationship found between node 'crc' and this object" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.061174 5129 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-68q8g/must-gather-5ntlt"] Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.424677 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-68q8g_must-gather-5ntlt_3486d756-b2a2-474b-90b8-c521e601778f/copy/0.log" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.425747 5129 generic.go:358] "Generic (PLEG): container finished" podID="3486d756-b2a2-474b-90b8-c521e601778f" containerID="997f77b0f0b599b1750e3d9f0b94d8e9274078ee84d6d8282707882542c9c3c1" exitCode=143 Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.425869 5129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e6f99b74e0eea3151adf936c3c1f1f27dab678fbd2d314cfb1b837b4e5de598" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.449939 5129 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-68q8g_must-gather-5ntlt_3486d756-b2a2-474b-90b8-c521e601778f/copy/0.log" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.450274 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.451461 5129 status_manager.go:895] "Failed to get status for pod" podUID="3486d756-b2a2-474b-90b8-c521e601778f" pod="openshift-must-gather-68q8g/must-gather-5ntlt" err="pods \"must-gather-5ntlt\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-68q8g\": no relationship found between node 'crc' and this object" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.505600 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl7ps\" (UniqueName: \"kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps\") pod \"3486d756-b2a2-474b-90b8-c521e601778f\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.506031 5129 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output\") pod \"3486d756-b2a2-474b-90b8-c521e601778f\" (UID: \"3486d756-b2a2-474b-90b8-c521e601778f\") " Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.516767 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps" (OuterVolumeSpecName: "kube-api-access-sl7ps") pod "3486d756-b2a2-474b-90b8-c521e601778f" (UID: "3486d756-b2a2-474b-90b8-c521e601778f"). InnerVolumeSpecName "kube-api-access-sl7ps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.551096 5129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3486d756-b2a2-474b-90b8-c521e601778f" (UID: "3486d756-b2a2-474b-90b8-c521e601778f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.608129 5129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sl7ps\" (UniqueName: \"kubernetes.io/projected/3486d756-b2a2-474b-90b8-c521e601778f-kube-api-access-sl7ps\") on node \"crc\" DevicePath \"\"" Dec 11 17:21:15 crc kubenswrapper[5129]: I1211 17:21:15.608169 5129 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3486d756-b2a2-474b-90b8-c521e601778f-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 17:21:16 crc kubenswrapper[5129]: I1211 17:21:16.432298 5129 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-68q8g/must-gather-5ntlt" Dec 11 17:21:16 crc kubenswrapper[5129]: I1211 17:21:16.434445 5129 status_manager.go:895] "Failed to get status for pod" podUID="3486d756-b2a2-474b-90b8-c521e601778f" pod="openshift-must-gather-68q8g/must-gather-5ntlt" err="pods \"must-gather-5ntlt\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-68q8g\": no relationship found between node 'crc' and this object" Dec 11 17:21:16 crc kubenswrapper[5129]: I1211 17:21:16.448124 5129 status_manager.go:895] "Failed to get status for pod" podUID="3486d756-b2a2-474b-90b8-c521e601778f" pod="openshift-must-gather-68q8g/must-gather-5ntlt" err="pods \"must-gather-5ntlt\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-68q8g\": no relationship found between node 'crc' and this object" Dec 11 17:21:16 crc kubenswrapper[5129]: I1211 17:21:16.526734 5129 status_manager.go:895] "Failed to get status for pod" podUID="3486d756-b2a2-474b-90b8-c521e601778f" pod="openshift-must-gather-68q8g/must-gather-5ntlt" err="pods \"must-gather-5ntlt\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-68q8g\": no relationship found between node 'crc' and this object" Dec 11 17:21:16 crc kubenswrapper[5129]: I1211 17:21:16.529707 5129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3486d756-b2a2-474b-90b8-c521e601778f" path="/var/lib/kubelet/pods/3486d756-b2a2-474b-90b8-c521e601778f/volumes" Dec 11 17:21:21 crc kubenswrapper[5129]: E1211 17:21:21.520999 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:21:22 crc kubenswrapper[5129]: E1211 17:21:22.520505 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:21:35 crc kubenswrapper[5129]: E1211 17:21:35.520798 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:21:35 crc kubenswrapper[5129]: E1211 17:21:35.520922 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:21:49 crc kubenswrapper[5129]: E1211 17:21:49.522140 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188" Dec 11 17:21:50 crc kubenswrapper[5129]: E1211 17:21:50.528403 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c" Dec 11 17:21:57 crc kubenswrapper[5129]: I1211 17:21:57.552461 5129 ???:1] "http: TLS handshake error from 192.168.126.11:49702: no serving certificate available for the kubelet" Dec 11 17:22:01 crc kubenswrapper[5129]: E1211 17:22:01.521233 5129 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:22:01 crc kubenswrapper[5129]: E1211 17:22:01.521317 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:22:13 crc kubenswrapper[5129]: E1211 17:22:13.521010 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:22:16 crc kubenswrapper[5129]: E1211 17:22:16.527509 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:22:28 crc kubenswrapper[5129]: E1211 17:22:28.521604 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:22:31 crc kubenswrapper[5129]: E1211 17:22:31.521874 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"
Dec 11 17:22:38 crc kubenswrapper[5129]: I1211 17:22:38.947463 5129 patch_prober.go:28] interesting pod/machine-config-daemon-9gtgq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 17:22:38 crc kubenswrapper[5129]: I1211 17:22:38.947673 5129 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9gtgq" podUID="b9f3b447-4c51-44f3-9ade-21b54c3a6daf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 17:22:41 crc kubenswrapper[5129]: E1211 17:22:41.522014 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-mqml2" podUID="90e4f2f1-2390-4d7b-a33b-28cc0714f188"
Dec 11 17:22:43 crc kubenswrapper[5129]: E1211 17:22:43.520566 5129 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-czsmf" podUID="2b343d47-5ac2-4494-be36-d38785b71e3c"