Dec 08 19:29:07 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 19:29:07 crc kubenswrapper[5120]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.455435 5120 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459832 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459865 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459875 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459883 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459892 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459900 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459907 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459915 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459922 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459936 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459943 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459951 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459958 5120 feature_gate.go:328] unrecognized feature gate: 
IrreconcilableMachineConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459965 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459973 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459980 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459988 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.459996 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460004 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460011 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460018 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460024 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460032 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460038 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460045 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460053 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460060 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460067 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460074 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460081 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460088 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460095 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460102 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460110 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460117 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460124 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460131 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460141 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460149 5120 
feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460156 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460195 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460216 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460223 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460230 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460237 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460244 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460251 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460258 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460265 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460273 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460280 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460287 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460294 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460304 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460314 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460322 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460330 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460337 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460344 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460350 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460357 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460365 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460372 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460379 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460386 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460392 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460400 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460407 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460416 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460423 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460430 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460437 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460444 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460451 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460470 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460489 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460496 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460504 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460510 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:07 crc kubenswrapper[5120]: 
W1208 19:29:07.460520 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460529 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460537 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460545 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460553 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460560 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.460568 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461434 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461448 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461456 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461463 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461470 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461477 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461484 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461492 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461499 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461506 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461513 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461519 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461527 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461534 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461547 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461554 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461561 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461569 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:07 crc kubenswrapper[5120]: 
W1208 19:29:07.461579 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461588 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461597 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461605 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461613 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461621 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461629 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461637 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461644 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461652 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461659 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461666 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461674 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461680 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461687 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461694 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461701 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461708 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461715 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461722 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461730 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461736 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461743 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461751 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461758 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461765 5120 
feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461771 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461778 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461794 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461803 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461811 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461819 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461825 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461832 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461839 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461847 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461854 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461861 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461868 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461875 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461882 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461891 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461898 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461908 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461914 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461922 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461929 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461936 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461943 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461950 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461957 5120 feature_gate.go:328] unrecognized feature gate: 
AzureMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461964 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461971 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461978 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461985 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.461992 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462000 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462007 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462014 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462020 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462031 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462038 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462045 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462053 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462061 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462068 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462075 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.462083 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462244 5120 flags.go:64] FLAG: --address="0.0.0.0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462260 5120 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462274 5120 flags.go:64] FLAG: --anonymous-auth="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462285 5120 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462295 5120 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462304 5120 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462314 5120 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462324 5120 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462332 5120 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462340 5120 
flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462349 5120 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462357 5120 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462365 5120 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462373 5120 flags.go:64] FLAG: --cgroup-root="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462380 5120 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462389 5120 flags.go:64] FLAG: --client-ca-file="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462396 5120 flags.go:64] FLAG: --cloud-config="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462404 5120 flags.go:64] FLAG: --cloud-provider="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462411 5120 flags.go:64] FLAG: --cluster-dns="[]" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462420 5120 flags.go:64] FLAG: --cluster-domain="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462428 5120 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462436 5120 flags.go:64] FLAG: --config-dir="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462443 5120 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462452 5120 flags.go:64] FLAG: --container-log-max-files="5" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462462 5120 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462471 5120 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462479 5120 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462487 5120 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462494 5120 flags.go:64] FLAG: --contention-profiling="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462502 5120 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462510 5120 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462518 5120 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462526 5120 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462538 5120 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462546 5120 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462554 5120 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462562 5120 flags.go:64] FLAG: --enable-load-reader="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462570 5120 flags.go:64] FLAG: --enable-server="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462578 5120 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 
19:29:07.462589 5120 flags.go:64] FLAG: --event-burst="100" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462597 5120 flags.go:64] FLAG: --event-qps="50" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462605 5120 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462613 5120 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462621 5120 flags.go:64] FLAG: --eviction-hard="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462630 5120 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462638 5120 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462646 5120 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462654 5120 flags.go:64] FLAG: --eviction-soft="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462662 5120 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462670 5120 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462678 5120 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462686 5120 flags.go:64] FLAG: --experimental-mounter-path="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462694 5120 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462701 5120 flags.go:64] FLAG: --fail-swap-on="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462709 5120 flags.go:64] FLAG: --feature-gates="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462719 5120 flags.go:64] FLAG: --file-check-frequency="20s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462728 5120 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462736 5120 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462744 5120 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462752 5120 flags.go:64] FLAG: --healthz-port="10248" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462760 5120 flags.go:64] FLAG: --help="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462767 5120 flags.go:64] FLAG: --hostname-override="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462775 5120 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462784 5120 flags.go:64] FLAG: --http-check-frequency="20s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462792 5120 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462800 5120 flags.go:64] FLAG: --image-credential-provider-config="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462827 5120 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462835 5120 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462843 5120 flags.go:64] FLAG: --image-service-endpoint="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462851 5120 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462858 5120 flags.go:64] FLAG: --kube-api-burst="100" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462866 5120 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462875 5120 flags.go:64] FLAG: --kube-api-qps="50" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462882 5120 flags.go:64] FLAG: --kube-reserved="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462890 5120 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462897 5120 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462906 5120 flags.go:64] FLAG: --kubelet-cgroups="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462913 5120 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462921 5120 flags.go:64] FLAG: --lock-file="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462928 5120 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462936 5120 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462944 5120 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462956 5120 flags.go:64] FLAG: --log-json-split-stream="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462964 5120 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462971 5120 flags.go:64] FLAG: --log-text-split-stream="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462979 5120 flags.go:64] FLAG: --logging-format="text" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462987 5120 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.462996 5120 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463004 5120 flags.go:64] FLAG: --manifest-url="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463012 5120 flags.go:64] FLAG: --manifest-url-header="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463022 5120 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463029 5120 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463039 5120 flags.go:64] FLAG: --max-pods="110" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463047 5120 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463055 5120 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463063 5120 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463071 5120 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463078 5120 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463087 5120 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463096 5120 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463114 5120 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463121 5120 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463130 5120 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463139 5120 flags.go:64] FLAG: --pod-cidr="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463147 5120 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463185 5120 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463193 5120 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463217 5120 flags.go:64] FLAG: --pods-per-core="0" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463226 5120 flags.go:64] FLAG: --port="10250" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463234 5120 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463241 5120 flags.go:64] FLAG: --provider-id="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463249 5120 flags.go:64] FLAG: --qos-reserved="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463257 5120 flags.go:64] FLAG: --read-only-port="10255" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463265 5120 flags.go:64] FLAG: --register-node="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463273 5120 flags.go:64] FLAG: --register-schedulable="true" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463280 5120 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463294 5120 flags.go:64] FLAG: --registry-burst="10" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463301 5120 flags.go:64] FLAG: --registry-qps="5" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463309 5120 flags.go:64] FLAG: --reserved-cpus="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463317 5120 flags.go:64] FLAG: --reserved-memory="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463336 5120 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463344 5120 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463352 5120 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463360 5120 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463370 5120 flags.go:64] FLAG: --runonce="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463378 5120 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463386 5120 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463394 5120 flags.go:64] FLAG: --seccomp-default="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463405 5120 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 
08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463413 5120 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463422 5120 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463429 5120 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463439 5120 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463446 5120 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463454 5120 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463462 5120 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463469 5120 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463478 5120 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463485 5120 flags.go:64] FLAG: --system-cgroups="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463493 5120 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463506 5120 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463514 5120 flags.go:64] FLAG: --tls-cert-file="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463522 5120 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463531 5120 flags.go:64] FLAG: --tls-min-version="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463538 5120 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463546 5120 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463554 5120 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463562 5120 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463570 5120 flags.go:64] FLAG: --v="2" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463580 5120 flags.go:64] FLAG: --version="false" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463590 5120 flags.go:64] FLAG: --vmodule="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463600 5120 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.463608 5120 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463788 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463797 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463807 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463817 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463826 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 
19:29:07.463835 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463844 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463858 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463867 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463876 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463885 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463894 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463908 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463919 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463928 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463937 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463946 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463954 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463961 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463968 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463975 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463982 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463989 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.463996 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464004 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464011 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464019 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464026 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464033 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464040 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464047 5120 feature_gate.go:328] 
unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464054 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464062 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464070 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464077 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464084 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464091 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464099 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464106 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464117 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464123 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464131 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464138 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464145 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464152 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464192 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464200 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464207 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464214 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464222 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464228 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464236 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464244 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464251 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464258 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464265 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:07 crc 
kubenswrapper[5120]: W1208 19:29:07.464276 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464285 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464293 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464302 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464310 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464318 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464325 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464338 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464345 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464353 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464360 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464367 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464373 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464380 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464388 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464400 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464407 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464415 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464422 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464429 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464436 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464443 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464451 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464459 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464466 5120 feature_gate.go:328] unrecognized feature gate: 
SigstoreImageVerification Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464473 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464480 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464487 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464494 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.464502 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.464752 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.475478 5120 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.475512 5120 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475565 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475573 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475576 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475580 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475584 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475587 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475591 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475595 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475600 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475607 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475611 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475616 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475620 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475623 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475626 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475630 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475633 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475637 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475640 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475643 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475646 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475650 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475653 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475656 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475659 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475663 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475666 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475669 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475672 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475675 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475679 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475684 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475687 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475691 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475694 5120 feature_gate.go:328] unrecognized feature gate: 
NoRegistryClusterOperations Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475697 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475701 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475704 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475707 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475711 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475714 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475718 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475721 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475725 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475728 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475731 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475734 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475737 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475741 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475744 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475747 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475750 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475753 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475757 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475760 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475763 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475766 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475769 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475772 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475775 5120 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImagesvSphere Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475779 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475782 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475785 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475788 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475792 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475795 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475798 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475801 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475805 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475808 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475811 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475814 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475817 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475822 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475825 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475829 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475832 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475836 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475839 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475843 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475847 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475851 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475854 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475858 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475861 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475864 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.475871 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475985 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475991 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475995 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.475999 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476003 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476006 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476009 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476013 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476016 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476019 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476023 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476027 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476030 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476033 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476037 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:07 crc kubenswrapper[5120]: 
W1208 19:29:07.476040 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476044 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476049 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476053 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476056 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476059 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476063 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476066 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476070 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476073 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476076 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476080 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476083 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476087 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476090 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476093 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476096 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476100 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476103 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476106 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476109 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476112 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476115 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476119 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476122 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476125 5120 feature_gate.go:328] unrecognized feature 
gate: OVNObservability Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476128 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476132 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476136 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476139 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476143 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476147 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476150 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476154 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476158 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476175 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476179 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476182 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476187 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476191 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476196 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476201 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476205 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476210 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476215 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476219 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476224 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476228 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476231 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476235 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476238 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476241 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476244 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476248 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476251 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476254 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476258 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476261 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476265 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476268 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476272 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476276 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476279 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476282 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476286 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476289 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476293 5120 
feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476296 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476299 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476303 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:07 crc kubenswrapper[5120]: W1208 19:29:07.476306 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.476311 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.476669 5120 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.479104 5120 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.483815 5120 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.483994 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.484912 5120 server.go:1019] "Starting client certificate rotation" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.485095 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.485234 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.494913 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.499620 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.499635 5120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.508135 5120 log.go:25] "Validated CRI v1 runtime API" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.530780 5120 log.go:25] "Validated CRI v1 image API" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.532527 5120 server.go:1452] "Using cgroup 
driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.535268 5120 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-19-23-14-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.535312 5120 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.559783 5120 manager.go:217] Machine: {Timestamp:2025-12-08 19:29:07.557830765 +0000 UTC m=+0.229937494 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:75177e4e-6d2e-4439-a8d9-c238b596e121 BootID:9b47515e-a1bf-4035-b74a-f035e64eeafd Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5e:57:9f Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5e:57:9f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ce:f8:58 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ac:01:2f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:95:86:ea Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a0:0e:73 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:5f:e5:ed:ff:ab Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c6:66:73:0f:86:57 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified 
Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.560250 5120 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
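Note: the E1208 19:29:07.479104 bootstrap.go:266 error earlier in this capture reports that the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig expired on 2025-12-03, so the kubelet falls back to bootstrap credentials (bootstrap.go:101) and its first rotation attempt fails with connection refused because api-int.crc.testing:6443 is not yet reachable. The following is a small diagnostic sketch (stdlib Go only; the path is the one logged by certificate_store.go:147, and the program itself is hypothetical) for confirming the on-disk client certificate's expiry independently of the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path as logged by certificate_store.go:147 above.
	path := "/var/lib/kubelet/pki/kubelet-client-current.pem"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Walk every PEM block in the file and report certificate expiry.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			continue
		}
		fmt.Printf("subject=%s notAfter=%s expired=%v\n",
			cert.Subject, cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}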
Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.560414 5120 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561247 5120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561285 5120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561475 5120 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561487 5120 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561511 5120 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.561745 5120 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562096 5120 state_mem.go:36] "Initialized new in-memory state store" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562273 5120 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562815 5120 kubelet.go:491] "Attempting to sync node with API server" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562834 5120 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562849 5120 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562863 
5120 kubelet.go:397] "Adding apiserver pod source" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.562882 5120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.568306 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.568265 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.568445 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.570778 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.572298 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.572344 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.574060 5120 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.574392 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.574935 5120 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575446 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575472 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575481 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575489 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575498 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575506 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575521 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575529 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575539 5120 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575553 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575566 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575724 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575966 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.575982 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.577146 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.586121 5120 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.586207 5120 server.go:1295] "Started kubelet" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.586558 5120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.586828 5120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.587010 5120 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 19:29:07 crc systemd[1]: Started Kubernetes Kubelet. Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.587976 5120 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.588531 5120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.588619 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.589209 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.589420 5120 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.589573 5120 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.589613 5120 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.589721 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.589862 5120 server.go:317] "Adding debug handlers to kubelet server" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.590195 5120 factory.go:55] Registering systemd factory Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.590238 5120 factory.go:223] Registration of the systemd container factory successfully Dec 08 19:29:07 
crc kubenswrapper[5120]: E1208 19:29:07.589774 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f5430d0863238 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.586142776 +0000 UTC m=+0.258249425,LastTimestamp:2025-12-08 19:29:07.586142776 +0000 UTC m=+0.258249425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.591321 5120 factory.go:153] Registering CRI-O factory Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.591404 5120 factory.go:223] Registration of the crio container factory successfully Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.591542 5120 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.591632 5120 factory.go:103] Registering Raw factory Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.591709 5120 manager.go:1196] Started watching for new ooms in manager Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.592409 5120 manager.go:319] Starting recovery of all containers Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.593555 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624223 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624270 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624283 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624294 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624305 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624315 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624324 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624358 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624370 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624367 5120 manager.go:324] Recovery completed Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624383 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624577 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624597 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624610 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624621 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624633 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624642 5120 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624652 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624662 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624672 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624682 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624691 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624721 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624731 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624740 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624750 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624759 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624771 5120 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624782 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624795 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624804 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624815 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624825 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624835 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624846 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624856 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624865 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624874 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624886 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624897 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624906 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624916 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624946 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624959 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624975 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624985 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.624996 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625007 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625017 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625027 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625037 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625047 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625057 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625086 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625095 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625105 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625114 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625131 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625142 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625153 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625181 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625190 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625200 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625211 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625221 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625254 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625265 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625276 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625287 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625299 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625309 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625319 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625330 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625343 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625353 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625364 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625378 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625388 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625398 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625426 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625435 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625445 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625455 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625465 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625474 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625485 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625497 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625507 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625517 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625561 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625573 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625583 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625594 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625604 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625614 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625625 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625636 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625646 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625655 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625664 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625675 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625686 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625695 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625705 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625715 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625724 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625733 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625743 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625756 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625788 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625800 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625810 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625820 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625843 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625852 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625863 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625874 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625884 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.625896 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626749 5120 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626775 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626827 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626840 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626851 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626862 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626894 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626903 5120 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626915 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626925 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626935 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626946 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626956 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626967 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626977 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626987 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.626998 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627008 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627019 5120 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627091 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627103 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627119 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627130 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627142 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627153 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627218 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627229 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627239 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627250 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627261 5120 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627273 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627284 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627295 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627305 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627315 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627325 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627335 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627364 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627375 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627384 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627393 5120 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627402 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627413 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627423 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627434 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627445 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627455 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627465 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627475 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627487 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627498 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627508 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627520 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627535 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627545 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627557 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627568 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627578 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627588 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627599 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627610 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627620 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627630 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627640 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627650 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627663 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627673 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627683 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627693 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627704 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627714 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627724 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627735 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627744 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627754 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627764 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627774 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627784 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627793 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627802 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627813 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627822 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627832 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627844 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627853 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" 
volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627867 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627877 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627887 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627898 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627908 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627918 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627929 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627939 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627949 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627959 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627968 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627980 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.627991 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628001 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628011 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628021 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628033 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628043 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628054 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628064 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628074 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628085 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628120 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628131 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628141 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628151 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628176 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628186 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628196 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628206 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628216 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628227 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628247 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628257 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628266 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628275 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628285 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628296 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628306 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628316 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628327 5120 reconstruct.go:97] "Volume reconstruction finished" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.628334 5120 reconciler.go:26] "Reconciler: start to sync state" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.641914 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.643578 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.643616 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.643630 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.652304 5120 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.652340 5120 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 19:29:07 crc kubenswrapper[5120]: 
I1208 19:29:07.652367 5120 state_mem.go:36] "Initialized new in-memory state store" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.655729 5120 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.658435 5120 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.658472 5120 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.658491 5120 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.658499 5120 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.658576 5120 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.659013 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.660461 5120 policy_none.go:49] "None policy: Start" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.660489 5120 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.660505 5120 state_mem.go:35] "Initializing new in-memory state store" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.690110 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.718419 5120 manager.go:341] "Starting Device Plugin manager" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.718650 5120 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.718669 5120 server.go:85] "Starting device plugin registration server" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.719026 5120 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.719036 5120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.719219 5120 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.719286 5120 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.719291 5120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.722791 5120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.722837 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.759118 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.759369 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760155 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760179 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760646 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760875 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.760926 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761607 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761682 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.761697 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.762682 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.762842 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.762930 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763122 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763145 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763155 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763567 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763597 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.763809 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.764097 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.764129 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.764265 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.764300 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.764313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765049 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765187 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765210 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765335 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765425 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765740 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765762 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765770 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765813 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765841 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.765851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.766519 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.766559 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.767126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.767151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.767175 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.789833 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.794526 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.801336 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.819286 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.820247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.820286 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.820299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.820322 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.820751 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.822362 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830726 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830758 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830906 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830926 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830943 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830961 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.830977 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831015 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831078 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831096 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831110 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831223 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831269 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831293 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831380 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831414 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831461 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831481 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831502 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831525 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831718 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831804 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831851 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831838 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831900 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.831872 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.832138 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.845529 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: E1208 19:29:07.853089 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932687 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932740 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932767 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932877 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932954 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932975 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932979 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932911 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.932995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933025 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933029 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933047 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933061 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933092 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933081 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933083 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933128 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933201 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933208 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933134 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933271 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933288 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933308 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933343 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933350 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:07 crc kubenswrapper[5120]: I1208 19:29:07.933453 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.021596 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.022861 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.022931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.022950 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.022987 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.023641 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.094925 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.102224 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.123596 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: W1208 19:29:08.133882 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-47825a53aef61980709089595b9b5af075077d0f259f23fda5733924213a7c33 WatchSource:0}: Error finding container 47825a53aef61980709089595b9b5af075077d0f259f23fda5733924213a7c33: Status 404 returned error can't find the container with id 47825a53aef61980709089595b9b5af075077d0f259f23fda5733924213a7c33 Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.144222 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.145905 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.153798 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:08 crc kubenswrapper[5120]: W1208 19:29:08.163334 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-926391ec8e99d9cbbe3b3bc246b8fc19d4d90e5f5ce93b42642736f002f03fbe WatchSource:0}: Error finding container 926391ec8e99d9cbbe3b3bc246b8fc19d4d90e5f5ce93b42642736f002f03fbe: Status 404 returned error can't find the container with id 926391ec8e99d9cbbe3b3bc246b8fc19d4d90e5f5ce93b42642736f002f03fbe Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.190922 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.424499 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.426097 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.426143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.426158 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.426213 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.426824 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.505306 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:08 crc 
kubenswrapper[5120]: I1208 19:29:08.577983 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.646190 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.661573 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"47825a53aef61980709089595b9b5af075077d0f259f23fda5733924213a7c33"} Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.662758 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c041dcfa8c277f543a84f6cdcc9b3e4308d0cdffb0ca7fa4138189816843f678"} Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.663519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"202946c01d40bad3b78bc06e1a21b847b56b4eb48335e29c70334baa869f1245"} Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.664262 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"926391ec8e99d9cbbe3b3bc246b8fc19d4d90e5f5ce93b42642736f002f03fbe"} Dec 08 19:29:08 crc kubenswrapper[5120]: I1208 19:29:08.665010 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"4ac6de683d4a031bf5dc1cfdde206f36eb3c5ee6626bc7d72c65c010384d857a"} Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.868941 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:08 crc kubenswrapper[5120]: E1208 19:29:08.991489 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.081614 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.227606 5120 kubelet_node_status.go:413] "Setting node annotation 
to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.229268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.229342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.229362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.229401 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.229979 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.578355 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.636228 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.637735 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.670154 5120 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e" exitCode=0 Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.670468 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.670746 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.671946 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.672000 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.672023 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.672397 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.675553 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.675612 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.675633 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.675649 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.675831 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.678676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.678729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.678754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.679044 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.681162 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb" exitCode=0 Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.681289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.681535 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.682398 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.682456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.682476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.682755 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.688342 
5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.689498 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.689568 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.689594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.689625 5120 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be" exitCode=0 Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.689833 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.690066 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.691029 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.691088 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.691113 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.691443 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.692065 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04" exitCode=0 Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.692137 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04"} Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.692299 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.694585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.694654 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:09 crc kubenswrapper[5120]: I1208 19:29:09.694690 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:09 crc kubenswrapper[5120]: E1208 19:29:09.695076 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:09 crc 
kubenswrapper[5120]: I1208 19:29:09.911079 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.696944 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.696994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.697008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.697040 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.698343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.698424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.698447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5120]: E1208 19:29:10.698777 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.701929 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.701963 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.701976 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.701986 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.704367 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.704516 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.705328 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.705366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.705379 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5120]: E1208 19:29:10.705635 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.710607 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9" exitCode=0 Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.710649 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9"} Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.710760 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.710990 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.711253 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.711279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.711288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5120]: E1208 19:29:10.711544 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.712031 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.712060 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.712071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5120]: E1208 19:29:10.712260 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.834175 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:10 crc 
kubenswrapper[5120]: I1208 19:29:10.844959 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.845024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.845037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:10 crc kubenswrapper[5120]: I1208 19:29:10.845075 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.279256 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.717986 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6" exitCode=0 Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.718121 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6"} Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.718872 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.719969 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.720017 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.720037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5120]: E1208 19:29:11.720386 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724207 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724288 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724315 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724157 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"510f33ad831ac55ee5a535adcdaa10a25c972c8de2648113389d854671db5e52"} Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724364 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.724910 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725529 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725558 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725586 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.725660 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5120]: E1208 19:29:11.725770 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5120]: E1208 19:29:11.726029 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5120]: E1208 19:29:11.726354 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.727521 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.727587 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:11 crc kubenswrapper[5120]: I1208 19:29:11.727615 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:11 crc kubenswrapper[5120]: E1208 19:29:11.729371 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.620062 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732268 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96"} Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732379 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc"} Dec 08 
19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732417 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec"} Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732435 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be"} Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732450 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732506 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732450 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.732574 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733474 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733533 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733557 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733571 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733869 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733932 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:12 crc kubenswrapper[5120]: I1208 19:29:12.733956 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:12 crc kubenswrapper[5120]: E1208 19:29:12.734037 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:12 crc kubenswrapper[5120]: E1208 19:29:12.734823 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:12 crc kubenswrapper[5120]: E1208 19:29:12.735344 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.713644 5120 certificate_manager.go:566] "Rotating 
certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.741516 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7"} Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.741679 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.741785 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742660 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:13 crc kubenswrapper[5120]: I1208 19:29:13.742725 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:13 crc kubenswrapper[5120]: E1208 19:29:13.743203 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:13 crc kubenswrapper[5120]: E1208 19:29:13.743721 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.279721 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.279878 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.519213 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.743871 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.743872 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:14 crc kubenswrapper[5120]: 
I1208 19:29:14.744984 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.745052 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.745071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.745213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.745266 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:14 crc kubenswrapper[5120]: I1208 19:29:14.745294 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:14 crc kubenswrapper[5120]: E1208 19:29:14.745697 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:14 crc kubenswrapper[5120]: E1208 19:29:14.746212 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.255681 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.559253 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.747850 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.748475 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.749289 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.749334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.749356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:15 crc kubenswrapper[5120]: E1208 19:29:15.750000 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.750049 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.750120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.750139 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:15 crc kubenswrapper[5120]: E1208 19:29:15.750780 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 
19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.911421 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.911728 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.912868 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.912991 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:15 crc kubenswrapper[5120]: I1208 19:29:15.913019 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:15 crc kubenswrapper[5120]: E1208 19:29:15.913619 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.668268 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.668549 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.669674 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.669729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.669750 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:17 crc kubenswrapper[5120]: E1208 19:29:17.670288 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.675356 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:17 crc kubenswrapper[5120]: E1208 19:29:17.723648 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.755009 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.755931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.755985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:17 crc kubenswrapper[5120]: I1208 19:29:17.755998 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:17 crc kubenswrapper[5120]: E1208 19:29:17.756420 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.378518 5120 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.378872 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.380185 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.380251 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.380268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:20 crc kubenswrapper[5120]: E1208 19:29:20.380876 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.579337 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 08 19:29:20 crc kubenswrapper[5120]: E1208 19:29:20.593037 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.647010 5120 trace.go:236] Trace[1644035208]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:10.645) (total time: 10001ms): Dec 08 19:29:20 crc kubenswrapper[5120]: Trace[1644035208]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:20.646) Dec 08 19:29:20 crc kubenswrapper[5120]: Trace[1644035208]: [10.001415582s] [10.001415582s] END Dec 08 19:29:20 crc kubenswrapper[5120]: E1208 19:29:20.647066 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:20 crc kubenswrapper[5120]: I1208 19:29:20.655534 5120 trace.go:236] Trace[938525056]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:10.654) (total time: 10001ms): Dec 08 19:29:20 crc kubenswrapper[5120]: Trace[938525056]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:20.655) Dec 08 19:29:20 crc kubenswrapper[5120]: Trace[938525056]: [10.001469325s] [10.001469325s] END Dec 08 19:29:20 crc kubenswrapper[5120]: E1208 19:29:20.655579 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:20 crc kubenswrapper[5120]: E1208 19:29:20.846363 5120 
kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 08 19:29:21 crc kubenswrapper[5120]: I1208 19:29:21.532343 5120 trace.go:236] Trace[899413242]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:11.529) (total time: 10002ms): Dec 08 19:29:21 crc kubenswrapper[5120]: Trace[899413242]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:29:21.532) Dec 08 19:29:21 crc kubenswrapper[5120]: Trace[899413242]: [10.002435939s] [10.002435939s] END Dec 08 19:29:21 crc kubenswrapper[5120]: E1208 19:29:21.532409 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:21 crc kubenswrapper[5120]: I1208 19:29:21.781985 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:21 crc kubenswrapper[5120]: I1208 19:29:21.782068 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 19:29:21 crc kubenswrapper[5120]: I1208 19:29:21.791947 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:21 crc kubenswrapper[5120]: I1208 19:29:21.792227 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 19:29:23 crc kubenswrapper[5120]: E1208 19:29:23.796346 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.046752 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.048103 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.048411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:24 crc kubenswrapper[5120]: 
I1208 19:29:24.048570 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.048786 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:24 crc kubenswrapper[5120]: E1208 19:29:24.065598 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.280324 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.280712 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.524039 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.524236 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.525036 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.525088 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.525101 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:24 crc kubenswrapper[5120]: E1208 19:29:24.525528 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.529066 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.774089 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.775259 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.775345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:24 crc kubenswrapper[5120]: I1208 19:29:24.775369 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:24 crc kubenswrapper[5120]: E1208 19:29:24.776090 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:25 crc kubenswrapper[5120]: E1208 19:29:25.217103 5120 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:25 crc kubenswrapper[5120]: E1208 19:29:25.410738 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.409215 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.785244 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d0863238 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.586142776 +0000 UTC m=+0.258249425,LastTimestamp:2025-12-08 19:29:07.586142776 +0000 UTC m=+0.258249425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.786712 5120 trace.go:236] Trace[1211781731]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:12.111) (total time: 14674ms): Dec 08 19:29:26 crc kubenswrapper[5120]: Trace[1211781731]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14674ms (19:29:26.786) Dec 08 19:29:26 crc kubenswrapper[5120]: Trace[1211781731]: [14.67485216s] [14.67485216s] END Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.786746 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.787136 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.788833 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.790363 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.794557 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.794576 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.800382 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d8a2009b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.722182811 +0000 UTC m=+0.394289460,LastTimestamp:2025-12-08 19:29:07.722182811 +0000 UTC m=+0.394289460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.807288 5120 event.go:359] "Server 
rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.760141515 +0000 UTC m=+0.432248164,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.815239 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.760174906 +0000 UTC m=+0.432281555,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.822559 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.760185176 +0000 UTC m=+0.432291825,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.829733 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.761628966 +0000 UTC m=+0.433735605,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.841416 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.842095 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49556->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.842406 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49556->192.168.126.11:17697: read: connection reset by peer" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.842098 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.842452 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.761645577 +0000 UTC m=+0.433752226,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.847914 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49584->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 19:29:26 crc kubenswrapper[5120]: I1208 19:29:26.847979 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49584->192.168.126.11:17697: read: connection reset by peer" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.850019 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.761654217 +0000 UTC m=+0.433760866,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.857924 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.761667828 +0000 UTC m=+0.433774487,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.868414 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.761691458 +0000 UTC m=+0.433798117,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.873441 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.761703549 +0000 UTC m=+0.433810208,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.878705 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: 
User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.76313808 +0000 UTC m=+0.435244729,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.882832 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.7631507 +0000 UTC m=+0.435257349,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.888127 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.763177131 +0000 UTC m=+0.435283780,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.891418 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.763589893 +0000 UTC m=+0.435696562,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.892590 
5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.763605023 +0000 UTC m=+0.435711692,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.895522 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.763618203 +0000 UTC m=+0.435724862,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.901136 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.764284891 +0000 UTC m=+0.436391550,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.905515 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.764307582 +0000 UTC m=+0.436414241,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.909530 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f37a9d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f37a9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.643636381 +0000 UTC m=+0.315743030,LastTimestamp:2025-12-08 19:29:07.764319482 +0000 UTC m=+0.436426141,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.914004 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f2fe00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f2fe00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64360448 +0000 UTC m=+0.315711129,LastTimestamp:2025-12-08 19:29:07.765405273 +0000 UTC m=+0.437511922,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.918606 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5430d3f3464e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5430d3f3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.64362299 +0000 UTC m=+0.315729639,LastTimestamp:2025-12-08 19:29:07.765474335 +0000 UTC m=+0.437580994,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.923715 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430f1cef4b2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.144559282 +0000 UTC m=+0.816665951,LastTimestamp:2025-12-08 19:29:08.144559282 +0000 UTC m=+0.816665951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.928300 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f5430f1d06aec openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.144655084 +0000 UTC m=+0.816761773,LastTimestamp:2025-12-08 19:29:08.144655084 +0000 UTC m=+0.816761773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.932610 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430f3729eb6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.17206239 +0000 UTC m=+0.844169059,LastTimestamp:2025-12-08 19:29:08.17206239 +0000 UTC m=+0.844169059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.937334 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5430f3b1cef2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.176203506 +0000 UTC m=+0.848310155,LastTimestamp:2025-12-08 19:29:08.176203506 +0000 UTC m=+0.848310155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.941422 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5430f3b36189 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.176306569 +0000 UTC m=+0.848413258,LastTimestamp:2025-12-08 19:29:08.176306569 +0000 UTC m=+0.848413258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.945789 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543111136019 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.669136921 +0000 UTC m=+1.341243570,LastTimestamp:2025-12-08 19:29:08.669136921 +0000 UTC m=+1.341243570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.950749 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543111156769 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 
19:29:08.669269865 +0000 UTC m=+1.341376524,LastTimestamp:2025-12-08 19:29:08.669269865 +0000 UTC m=+1.341376524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.955518 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54311187c8dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.676765917 +0000 UTC m=+1.348872566,LastTimestamp:2025-12-08 19:29:08.676765917 +0000 UTC m=+1.348872566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.960387 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f543111b72940 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.679870784 +0000 UTC m=+1.351977433,LastTimestamp:2025-12-08 19:29:08.679870784 +0000 UTC m=+1.351977433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.965098 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543111d7f848 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.682020936 +0000 UTC m=+1.354127585,LastTimestamp:2025-12-08 19:29:08.682020936 +0000 UTC m=+1.354127585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.968969 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543111e8657a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.683097466 +0000 UTC m=+1.355204125,LastTimestamp:2025-12-08 19:29:08.683097466 +0000 UTC m=+1.355204125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.978532 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543111f439b0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.683872688 +0000 UTC m=+1.355979347,LastTimestamp:2025-12-08 19:29:08.683872688 +0000 UTC m=+1.355979347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.984015 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54311258f16d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.690473325 +0000 UTC m=+1.362579984,LastTimestamp:2025-12-08 19:29:08.690473325 +0000 UTC m=+1.362579984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.990918 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54311278ca28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.692560424 +0000 UTC m=+1.364667073,LastTimestamp:2025-12-08 19:29:08.692560424 +0000 UTC m=+1.364667073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:26 crc kubenswrapper[5120]: E1208 19:29:26.996236 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f543112cb0ea2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.697951906 +0000 UTC m=+1.370058555,LastTimestamp:2025-12-08 19:29:08.697951906 +0000 UTC m=+1.370058555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.002588 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5431134164a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.705707175 +0000 UTC m=+1.377813834,LastTimestamp:2025-12-08 19:29:08.705707175 +0000 UTC m=+1.377813834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.008919 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54312386334d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.978651981 +0000 UTC m=+1.650758640,LastTimestamp:2025-12-08 19:29:08.978651981 +0000 UTC 
m=+1.650758640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.016554 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431243ad6df openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.990490335 +0000 UTC m=+1.662596984,LastTimestamp:2025-12-08 19:29:08.990490335 +0000 UTC m=+1.662596984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.023857 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5431244fef9d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.991872925 +0000 UTC m=+1.663979584,LastTimestamp:2025-12-08 19:29:08.991872925 +0000 UTC m=+1.663979584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.034988 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54313c7209b2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.39676101 +0000 UTC m=+2.068867669,LastTimestamp:2025-12-08 19:29:09.39676101 +0000 UTC m=+2.068867669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.040127 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54313d53aff1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.411549169 +0000 UTC m=+2.083655828,LastTimestamp:2025-12-08 19:29:09.411549169 +0000 UTC m=+2.083655828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.044776 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54313d6ae209 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.413069321 +0000 UTC m=+2.085176010,LastTimestamp:2025-12-08 19:29:09.413069321 +0000 UTC m=+2.085176010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.048979 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54314b4cfba0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.645990816 +0000 UTC m=+2.318097505,LastTimestamp:2025-12-08 19:29:09.645990816 +0000 UTC m=+2.318097505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 
08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.052917 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54314c0f8554 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.658740052 +0000 UTC m=+2.330846741,LastTimestamp:2025-12-08 19:29:09.658740052 +0000 UTC m=+2.330846741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.057476 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54314d06444f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.674910799 +0000 UTC m=+2.347017488,LastTimestamp:2025-12-08 19:29:09.674910799 +0000 UTC m=+2.347017488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.063492 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54314dcdf760 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.687998304 +0000 UTC m=+2.360104993,LastTimestamp:2025-12-08 19:29:09.687998304 +0000 UTC m=+2.360104993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.068543 5120 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54314e1c2e1d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.693124125 +0000 UTC m=+2.365230824,LastTimestamp:2025-12-08 19:29:09.693124125 +0000 UTC m=+2.365230824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.072847 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54314e640f67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.697834855 +0000 UTC m=+2.369941544,LastTimestamp:2025-12-08 19:29:09.697834855 +0000 UTC m=+2.369941544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.076436 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54315b871856 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.91823471 +0000 UTC m=+2.590341359,LastTimestamp:2025-12-08 19:29:09.91823471 +0000 UTC m=+2.590341359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.079760 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54315c0d4a8e 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.92702939 +0000 UTC m=+2.599136039,LastTimestamp:2025-12-08 19:29:09.92702939 +0000 UTC m=+2.599136039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.083442 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54315c21048b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.928322187 +0000 UTC m=+2.600428836,LastTimestamp:2025-12-08 19:29:09.928322187 +0000 UTC m=+2.600428836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.086779 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54315c260cfb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.928652027 +0000 UTC m=+2.600758696,LastTimestamp:2025-12-08 19:29:09.928652027 +0000 UTC m=+2.600758696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.090324 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54315c3484c3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.929600195 +0000 UTC m=+2.601706844,LastTimestamp:2025-12-08 19:29:09.929600195 +0000 UTC m=+2.601706844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.094742 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54315d195704 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.944596228 +0000 UTC m=+2.616702877,LastTimestamp:2025-12-08 19:29:09.944596228 +0000 UTC m=+2.616702877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.096205 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54315d3cffb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.946933177 +0000 UTC m=+2.619039826,LastTimestamp:2025-12-08 19:29:09.946933177 +0000 UTC m=+2.619039826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.099573 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54315d3f727f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.947093631 +0000 UTC m=+2.619200290,LastTimestamp:2025-12-08 19:29:09.947093631 +0000 UTC m=+2.619200290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.104240 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54315e2d55d4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.96268386 +0000 UTC m=+2.634790509,LastTimestamp:2025-12-08 19:29:09.96268386 +0000 UTC m=+2.634790509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.108658 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54315ebd0e34 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:09.972102708 +0000 UTC m=+2.644209357,LastTimestamp:2025-12-08 19:29:09.972102708 +0000 UTC m=+2.644209357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.112256 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543168260bc7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.129978311 +0000 UTC m=+2.802084960,LastTimestamp:2025-12-08 19:29:10.129978311 +0000 UTC m=+2.802084960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.116223 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543168b708b2 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.139480242 +0000 UTC m=+2.811586891,LastTimestamp:2025-12-08 19:29:10.139480242 +0000 UTC m=+2.811586891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.120276 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543168da44c3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.141789379 +0000 UTC m=+2.813896028,LastTimestamp:2025-12-08 19:29:10.141789379 +0000 UTC m=+2.813896028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.124279 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54316a377b1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.164675354 +0000 UTC m=+2.836782003,LastTimestamp:2025-12-08 19:29:10.164675354 +0000 UTC m=+2.836782003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.127893 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54316b26c992 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.180358546 +0000 UTC m=+2.852465195,LastTimestamp:2025-12-08 19:29:10.180358546 +0000 UTC m=+2.852465195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.131620 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54316b3a70d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.181646544 +0000 UTC m=+2.853753193,LastTimestamp:2025-12-08 19:29:10.181646544 +0000 UTC m=+2.853753193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.135231 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5431757e82da openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.35387977 +0000 UTC m=+3.025986429,LastTimestamp:2025-12-08 19:29:10.35387977 +0000 UTC m=+3.025986429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.138843 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5431764fe465 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.367601765 +0000 UTC m=+3.039708414,LastTimestamp:2025-12-08 19:29:10.367601765 +0000 UTC m=+3.039708414,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.143066 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543178def75c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.4105327 +0000 UTC m=+3.082639359,LastTimestamp:2025-12-08 19:29:10.4105327 +0000 UTC m=+3.082639359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.146454 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543179bf02ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.425215743 +0000 UTC m=+3.097322412,LastTimestamp:2025-12-08 19:29:10.425215743 +0000 UTC m=+3.097322412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.149883 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543179cf788f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.426294415 +0000 UTC m=+3.098401064,LastTimestamp:2025-12-08 19:29:10.426294415 +0000 UTC m=+3.098401064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.154695 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543184b4bd2b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.609091883 +0000 UTC m=+3.281198532,LastTimestamp:2025-12-08 19:29:10.609091883 +0000 UTC m=+3.281198532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.159677 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543185193499 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.615676057 +0000 UTC m=+3.287782706,LastTimestamp:2025-12-08 19:29:10.615676057 +0000 UTC m=+3.287782706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.163790 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54318528104d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.616649805 +0000 UTC 
m=+3.288756454,LastTimestamp:2025-12-08 19:29:10.616649805 +0000 UTC m=+3.288756454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.168741 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54318afdc7ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.714542062 +0000 UTC m=+3.386648721,LastTimestamp:2025-12-08 19:29:10.714542062 +0000 UTC m=+3.386648721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.176097 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319181fa2d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.823868973 +0000 UTC m=+3.495975622,LastTimestamp:2025-12-08 19:29:10.823868973 +0000 UTC m=+3.495975622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.180621 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543192313ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.835354852 +0000 UTC m=+3.507461491,LastTimestamp:2025-12-08 19:29:10.835354852 +0000 UTC m=+3.507461491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.185443 5120 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543197792466 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.923953254 +0000 UTC m=+3.596059903,LastTimestamp:2025-12-08 19:29:10.923953254 +0000 UTC m=+3.596059903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.189724 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431985178f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.938130672 +0000 UTC m=+3.610237321,LastTimestamp:2025-12-08 19:29:10.938130672 +0000 UTC m=+3.610237321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.194900 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431c71fd8aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.72340753 +0000 UTC m=+4.395514229,LastTimestamp:2025-12-08 19:29:11.72340753 +0000 UTC m=+4.395514229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.198704 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431d6e4d4fe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: 
etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:11.987975422 +0000 UTC m=+4.660082091,LastTimestamp:2025-12-08 19:29:11.987975422 +0000 UTC m=+4.660082091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.199480 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431d7a0df24 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.000298788 +0000 UTC m=+4.672405457,LastTimestamp:2025-12-08 19:29:12.000298788 +0000 UTC m=+4.672405457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.203059 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431d7b23974 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.00143602 +0000 UTC m=+4.673542679,LastTimestamp:2025-12-08 19:29:12.00143602 +0000 UTC m=+4.673542679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.207151 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431e7b8365a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.270263898 +0000 UTC m=+4.942370587,LastTimestamp:2025-12-08 19:29:12.270263898 +0000 UTC m=+4.942370587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.211971 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431e8eb23f3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.290378739 +0000 UTC m=+4.962485418,LastTimestamp:2025-12-08 19:29:12.290378739 +0000 UTC m=+4.962485418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.215472 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431e8ff04a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.291681444 +0000 UTC m=+4.963788133,LastTimestamp:2025-12-08 19:29:12.291681444 +0000 UTC m=+4.963788133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.220220 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431f3cd6ba2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.472980386 +0000 UTC m=+5.145087035,LastTimestamp:2025-12-08 19:29:12.472980386 +0000 UTC m=+5.145087035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.223873 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431f49312cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.485933771 +0000 UTC m=+5.158040430,LastTimestamp:2025-12-08 19:29:12.485933771 
+0000 UTC m=+5.158040430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.228556 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5431f4a944a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.48738832 +0000 UTC m=+5.159494969,LastTimestamp:2025-12-08 19:29:12.48738832 +0000 UTC m=+5.159494969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.232959 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54320170e5f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.70179788 +0000 UTC m=+5.373904529,LastTimestamp:2025-12-08 19:29:12.70179788 +0000 UTC m=+5.373904529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.238373 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5432028d2ba4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.72042794 +0000 UTC m=+5.392534629,LastTimestamp:2025-12-08 19:29:12.72042794 +0000 UTC m=+5.392534629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.242844 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543202a95e14 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.72227586 +0000 UTC m=+5.394382549,LastTimestamp:2025-12-08 19:29:12.72227586 +0000 UTC m=+5.394382549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.248579 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543212231e6d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.981913197 +0000 UTC m=+5.654019856,LastTimestamp:2025-12-08 19:29:12.981913197 +0000 UTC m=+5.654019856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.257750 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543213118ec5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:12.997539525 +0000 UTC m=+5.669646184,LastTimestamp:2025-12-08 19:29:12.997539525 +0000 UTC m=+5.669646184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.264096 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-controller-manager-crc.187f54325f7fe0bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 19:29:27 crc 
kubenswrapper[5120]: body: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:14.279837883 +0000 UTC m=+6.951944582,LastTimestamp:2025-12-08 19:29:14.279837883 +0000 UTC m=+6.951944582,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.273075 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54325f81a142 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:14.279952706 +0000 UTC m=+6.952059395,LastTimestamp:2025-12-08 19:29:14.279952706 +0000 UTC m=+6.952059395,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.279933 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.187f54341eaa58ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 19:29:27 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:27 crc kubenswrapper[5120]: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.782036686 +0000 UTC m=+14.454143335,LastTimestamp:2025-12-08 19:29:21.782036686 +0000 UTC m=+14.454143335,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.286734 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54341eab3d72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.782095218 +0000 UTC m=+14.454201867,LastTimestamp:2025-12-08 19:29:21.782095218 +0000 UTC m=+14.454201867,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.294352 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54341eaa58ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.187f54341eaa58ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 19:29:27 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:27 crc kubenswrapper[5120]: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.782036686 +0000 UTC m=+14.454143335,LastTimestamp:2025-12-08 19:29:21.792153544 +0000 UTC m=+14.464260233,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.305145 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54341eab3d72\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54341eab3d72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:21.782095218 +0000 UTC m=+14.454201867,LastTimestamp:2025-12-08 19:29:21.792325499 +0000 UTC m=+14.464432158,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.311171 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: 
&Event{ObjectMeta:{kube-controller-manager-crc.187f5434b3988da4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 08 19:29:27 crc kubenswrapper[5120]: body: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.280675748 +0000 UTC m=+16.952782437,LastTimestamp:2025-12-08 19:29:24.280675748 +0000 UTC m=+16.952782437,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.317024 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5434b39ae058 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.280827992 +0000 UTC m=+16.952934661,LastTimestamp:2025-12-08 19:29:24.280827992 +0000 UTC m=+16.952934661,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.323639 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.187f54354c441f62 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": EOF Dec 08 19:29:27 crc kubenswrapper[5120]: body: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.842056546 +0000 UTC m=+19.514163195,LastTimestamp:2025-12-08 19:29:26.842056546 +0000 UTC m=+19.514163195,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.328938 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.187f54354c48bade openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:49556->192.168.126.11:17697: read: connection reset by peer Dec 08 19:29:27 crc kubenswrapper[5120]: body: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.842358494 +0000 UTC m=+19.514465193,LastTimestamp:2025-12-08 19:29:26.842358494 +0000 UTC m=+19.514465193,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.333184 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54354c4cab0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49556->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.842616591 +0000 UTC m=+19.514723280,LastTimestamp:2025-12-08 19:29:26.842616591 +0000 UTC m=+19.514723280,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.340323 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54354c4e0c96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.842707094 +0000 UTC m=+19.514813783,LastTimestamp:2025-12-08 19:29:26.842707094 +0000 UTC m=+19.514813783,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.345855 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:27 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.187f54354c9e284b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:49584->192.168.126.11:17697: read: connection reset by peer Dec 08 19:29:27 crc kubenswrapper[5120]: body: Dec 08 19:29:27 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.847957067 +0000 UTC m=+19.520063716,LastTimestamp:2025-12-08 19:29:26.847957067 +0000 UTC m=+19.520063716,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:27 crc kubenswrapper[5120]: > Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.350394 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54354c9ece96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49584->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:26.847999638 +0000 UTC m=+19.520106287,LastTimestamp:2025-12-08 19:29:26.847999638 +0000 UTC m=+19.520106287,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.582407 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.723855 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.760442 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.760639 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.761680 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.761799 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 
19:29:27.761824 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.762472 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.784116 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.786137 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="510f33ad831ac55ee5a535adcdaa10a25c972c8de2648113389d854671db5e52" exitCode=255 Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.786230 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"510f33ad831ac55ee5a535adcdaa10a25c972c8de2648113389d854671db5e52"} Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.786508 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.787222 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.787286 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.787303 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.787815 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:27 crc kubenswrapper[5120]: I1208 19:29:27.788210 5120 scope.go:117] "RemoveContainer" containerID="510f33ad831ac55ee5a535adcdaa10a25c972c8de2648113389d854671db5e52" Dec 08 19:29:27 crc kubenswrapper[5120]: E1208 19:29:27.795523 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54318528104d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54318528104d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.616649805 +0000 UTC m=+3.288756454,LastTimestamp:2025-12-08 19:29:27.789392365 +0000 UTC m=+20.461499024,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:28 crc kubenswrapper[5120]: E1208 19:29:28.018726 5120 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-apiserver-crc.187f54319181fa2d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319181fa2d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.823868973 +0000 UTC m=+3.495975622,LastTimestamp:2025-12-08 19:29:28.013471228 +0000 UTC m=+20.685577877,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:28 crc kubenswrapper[5120]: E1208 19:29:28.032393 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f543192313ce4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543192313ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.835354852 +0000 UTC m=+3.507461491,LastTimestamp:2025-12-08 19:29:28.025296452 +0000 UTC m=+20.697403111,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.583308 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.793927 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.795913 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f"} Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.796320 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.797688 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.797747 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:28 crc kubenswrapper[5120]: I1208 19:29:28.797771 5120 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 19:29:28 crc kubenswrapper[5120]: E1208 19:29:28.798454 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.584360 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.800249 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.800824 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.802217 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" exitCode=255 Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.802262 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f"} Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.802313 5120 scope.go:117] "RemoveContainer" containerID="510f33ad831ac55ee5a535adcdaa10a25c972c8de2648113389d854671db5e52" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.802545 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.803312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.803343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.803352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:29 crc kubenswrapper[5120]: E1208 19:29:29.803667 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:29 crc kubenswrapper[5120]: I1208 19:29:29.803892 5120 scope.go:117] "RemoveContainer" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" Dec 08 19:29:29 crc kubenswrapper[5120]: E1208 19:29:29.804055 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:29 crc kubenswrapper[5120]: E1208 19:29:29.809458 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:30 crc kubenswrapper[5120]: E1208 19:29:30.203150 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.420578 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.420817 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.421659 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.421706 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.421724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:30 crc kubenswrapper[5120]: E1208 19:29:30.422304 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.440564 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.466438 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.467710 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.467786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.467815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.467861 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:30 crc kubenswrapper[5120]: E1208 19:29:30.479108 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API 
group \"\" at the cluster scope" node="crc" Dec 08 19:29:30 crc kubenswrapper[5120]: E1208 19:29:30.524392 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.579348 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.806780 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.808931 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.809610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.809685 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:30 crc kubenswrapper[5120]: I1208 19:29:30.809711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:30 crc kubenswrapper[5120]: E1208 19:29:30.810646 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.286358 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.286649 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.287724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.287804 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.287828 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:31 crc kubenswrapper[5120]: E1208 19:29:31.288440 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.292260 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.583453 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.811386 5120 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.812071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.812113 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:31 crc kubenswrapper[5120]: I1208 19:29:31.812126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:31 crc kubenswrapper[5120]: E1208 19:29:31.812532 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:32 crc kubenswrapper[5120]: I1208 19:29:32.587124 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:33 crc kubenswrapper[5120]: I1208 19:29:33.585691 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:33 crc kubenswrapper[5120]: E1208 19:29:33.734127 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.582817 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.598293 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.598680 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.599624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.599667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.599680 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:34 crc kubenswrapper[5120]: E1208 19:29:34.599990 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:34 crc kubenswrapper[5120]: I1208 19:29:34.600286 5120 scope.go:117] "RemoveContainer" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" Dec 08 19:29:34 crc kubenswrapper[5120]: E1208 19:29:34.600465 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:34 crc kubenswrapper[5120]: E1208 19:29:34.605000 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5435fcd0326e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:29:34.600435402 +0000 UTC m=+27.272542051,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:35 crc kubenswrapper[5120]: E1208 19:29:35.117731 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:35 crc kubenswrapper[5120]: I1208 19:29:35.580764 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:35 crc kubenswrapper[5120]: E1208 19:29:35.695064 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:36 crc kubenswrapper[5120]: I1208 19:29:36.587257 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:37 crc kubenswrapper[5120]: E1208 19:29:37.209777 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.479991 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.481121 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.481218 
5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.481250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.481306 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:37 crc kubenswrapper[5120]: E1208 19:29:37.492690 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:37 crc kubenswrapper[5120]: I1208 19:29:37.583965 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:37 crc kubenswrapper[5120]: E1208 19:29:37.724282 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.585330 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.797249 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.797504 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.798510 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.798563 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.798577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:38 crc kubenswrapper[5120]: E1208 19:29:38.799015 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:38 crc kubenswrapper[5120]: I1208 19:29:38.799326 5120 scope.go:117] "RemoveContainer" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" Dec 08 19:29:38 crc kubenswrapper[5120]: E1208 19:29:38.799565 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:38 crc kubenswrapper[5120]: E1208 19:29:38.805512 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5435fcd0326e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:29:38.799527916 +0000 UTC m=+31.471634565,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:39 crc kubenswrapper[5120]: I1208 19:29:39.583863 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:40 crc kubenswrapper[5120]: I1208 19:29:40.584536 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:41 crc kubenswrapper[5120]: E1208 19:29:41.495883 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:41 crc kubenswrapper[5120]: I1208 19:29:41.584496 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:42 crc kubenswrapper[5120]: I1208 19:29:42.584566 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:43 crc kubenswrapper[5120]: I1208 19:29:43.585307 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:44 crc kubenswrapper[5120]: E1208 19:29:44.216041 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.493758 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.495302 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.495367 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.495388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.495423 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:44 crc kubenswrapper[5120]: E1208 19:29:44.511723 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:44 crc kubenswrapper[5120]: I1208 19:29:44.585449 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:45 crc kubenswrapper[5120]: I1208 19:29:45.584496 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:46 crc kubenswrapper[5120]: I1208 19:29:46.584285 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:47 crc kubenswrapper[5120]: I1208 19:29:47.584276 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:47 crc kubenswrapper[5120]: E1208 19:29:47.725469 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:48 crc kubenswrapper[5120]: I1208 19:29:48.583342 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.585483 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.659838 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.660836 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.660872 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.660887 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:49 crc kubenswrapper[5120]: E1208 19:29:49.661298 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:49 crc kubenswrapper[5120]: I1208 19:29:49.661632 5120 scope.go:117] "RemoveContainer" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" Dec 08 19:29:49 crc kubenswrapper[5120]: E1208 19:29:49.668942 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54318528104d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54318528104d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.616649805 +0000 UTC m=+3.288756454,LastTimestamp:2025-12-08 19:29:49.662820977 +0000 UTC m=+42.334927646,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:49 crc kubenswrapper[5120]: E1208 19:29:49.958959 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f54319181fa2d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54319181fa2d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.823868973 +0000 UTC m=+3.495975622,LastTimestamp:2025-12-08 19:29:49.952002183 +0000 UTC m=+42.624108832,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:49 crc kubenswrapper[5120]: E1208 19:29:49.969427 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f543192313ce4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543192313ce4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:10.835354852 +0000 UTC m=+3.507461491,LastTimestamp:2025-12-08 19:29:49.967014974 +0000 UTC m=+42.639121623,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.582804 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.862661 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.865004 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d"} Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.865367 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.866134 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.866288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:50 crc kubenswrapper[5120]: I1208 19:29:50.866414 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:50 crc kubenswrapper[5120]: E1208 19:29:50.867069 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.222462 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.512902 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.514974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.515034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.515071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.515106 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.530671 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.584217 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.870693 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.871991 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.875231 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" exitCode=255 Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.875314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d"} Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.875361 5120 scope.go:117] "RemoveContainer" containerID="08f66c649a1081dc0233b28c95187580b7f2d8290fd93806f92a374c15c0ff2f" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.875682 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.876829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.876971 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.877061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.877604 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:51 crc kubenswrapper[5120]: I1208 19:29:51.878033 5120 scope.go:117] "RemoveContainer" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.878808 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.883853 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5435fcd0326e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container 
kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:29:51.87835454 +0000 UTC m=+44.550461199,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:51 crc kubenswrapper[5120]: E1208 19:29:51.923085 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:52 crc kubenswrapper[5120]: I1208 19:29:52.584882 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:52 crc kubenswrapper[5120]: I1208 19:29:52.879269 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:29:53 crc kubenswrapper[5120]: I1208 19:29:53.583541 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.586145 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.598696 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.598910 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.599743 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.599825 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.599851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:54 crc kubenswrapper[5120]: E1208 19:29:54.600547 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:54 crc kubenswrapper[5120]: I1208 19:29:54.601117 5120 scope.go:117] "RemoveContainer" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" Dec 08 19:29:54 crc kubenswrapper[5120]: E1208 19:29:54.601590 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:54 crc kubenswrapper[5120]: E1208 19:29:54.606856 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5435fcd0326e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:29:54.601514907 +0000 UTC m=+47.273621606,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:55 crc kubenswrapper[5120]: I1208 19:29:55.584936 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:55 crc kubenswrapper[5120]: E1208 19:29:55.781840 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:56 crc kubenswrapper[5120]: I1208 19:29:56.585482 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:57 crc kubenswrapper[5120]: E1208 19:29:57.171923 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:57 crc kubenswrapper[5120]: I1208 19:29:57.582762 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:57 crc kubenswrapper[5120]: E1208 19:29:57.726233 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:58 crc kubenswrapper[5120]: E1208 19:29:58.228400 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" 
Dec 08 19:29:58 crc kubenswrapper[5120]: E1208 19:29:58.303442 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.531088 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.532027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.532072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.532084 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.532105 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:58 crc kubenswrapper[5120]: E1208 19:29:58.541424 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:58 crc kubenswrapper[5120]: I1208 19:29:58.584658 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:59 crc kubenswrapper[5120]: I1208 19:29:59.584208 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.582558 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.865508 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.865733 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.866865 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.866922 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:00 crc kubenswrapper[5120]: I1208 19:30:00.866938 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:00 crc kubenswrapper[5120]: E1208 19:30:00.867644 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:00 crc kubenswrapper[5120]: 
I1208 19:30:00.868214 5120 scope.go:117] "RemoveContainer" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" Dec 08 19:30:00 crc kubenswrapper[5120]: E1208 19:30:00.868837 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:00 crc kubenswrapper[5120]: E1208 19:30:00.876439 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5435fcd0326e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5435fcd0326e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:29.804026478 +0000 UTC m=+22.476133127,LastTimestamp:2025-12-08 19:30:00.868762737 +0000 UTC m=+53.540869436,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:30:01 crc kubenswrapper[5120]: I1208 19:30:01.587125 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.583210 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.739006 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.739472 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.740392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.740439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:02 crc kubenswrapper[5120]: I1208 19:30:02.740450 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:02 crc kubenswrapper[5120]: E1208 19:30:02.740825 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:03 crc kubenswrapper[5120]: I1208 19:30:03.584603 5120 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:04 crc kubenswrapper[5120]: I1208 19:30:04.587035 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:05 crc kubenswrapper[5120]: E1208 19:30:05.235486 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.541954 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.543308 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.543388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.543408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.543446 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:05 crc kubenswrapper[5120]: E1208 19:30:05.561853 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:30:05 crc kubenswrapper[5120]: I1208 19:30:05.584673 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:06 crc kubenswrapper[5120]: I1208 19:30:06.585288 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:07 crc kubenswrapper[5120]: I1208 19:30:07.586645 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:07 crc kubenswrapper[5120]: E1208 19:30:07.726626 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:08 crc kubenswrapper[5120]: I1208 19:30:08.582147 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:09 crc kubenswrapper[5120]: I1208 19:30:09.584255 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:10 crc kubenswrapper[5120]: I1208 19:30:10.583892 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:11 crc kubenswrapper[5120]: I1208 19:30:11.586105 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:12 crc kubenswrapper[5120]: E1208 19:30:12.242490 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.561976 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.562962 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.563017 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.563041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.563076 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:12 crc kubenswrapper[5120]: E1208 19:30:12.573071 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.583206 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.615340 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-glkss" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.629115 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-glkss" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.659041 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.659956 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.660014 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.660029 5120 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:12 crc kubenswrapper[5120]: E1208 19:30:12.660735 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.661080 5120 scope.go:117] "RemoveContainer" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" Dec 08 19:30:12 crc kubenswrapper[5120]: I1208 19:30:12.676482 5120 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.485255 5120 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.630875 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 19:25:12 +0000 UTC" deadline="2026-01-04 04:05:06.906089514 +0000 UTC" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.630927 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="632h34m53.275165091s" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.934395 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.935496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.937387 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" exitCode=255 Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.937429 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac"} Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.937468 5120 scope.go:117] "RemoveContainer" containerID="afcfbeaa547813124a48eb229fe52788ed59bc4ed55fcdd1f273b12770809c0d" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.937665 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.938394 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.938465 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.938524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:13 crc kubenswrapper[5120]: E1208 19:30:13.939309 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:13 crc kubenswrapper[5120]: I1208 19:30:13.939761 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 
08 19:30:13 crc kubenswrapper[5120]: E1208 19:30:13.940082 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.598802 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.941647 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.944717 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.945400 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.945473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.945494 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:14 crc kubenswrapper[5120]: E1208 19:30:14.946311 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:14 crc kubenswrapper[5120]: I1208 19:30:14.946718 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:30:14 crc kubenswrapper[5120]: E1208 19:30:14.947091 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:17 crc kubenswrapper[5120]: E1208 19:30:17.727865 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.574093 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.575000 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.575034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.575046 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.575201 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.585558 5120 kubelet_node_status.go:127] "Node was previously 
registered" node="crc" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.585830 5120 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.585860 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.589913 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.589965 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.589977 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.589992 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.590002 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:19Z","lastTransitionTime":"2025-12-08T19:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.603057 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.609566 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.609603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.609614 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.609627 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.609638 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:19Z","lastTransitionTime":"2025-12-08T19:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.618592 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.624288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.624310 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.624319 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.624330 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.624339 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:19Z","lastTransitionTime":"2025-12-08T19:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.633441 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.651109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.651151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.651186 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.651203 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:19 crc kubenswrapper[5120]: I1208 19:30:19.651217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:19Z","lastTransitionTime":"2025-12-08T19:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.665838 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.666207 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.666258 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.766899 5120 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.867244 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5120]: E1208 19:30:19.967376 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.067690 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.167999 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.268426 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.369430 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.470533 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.570992 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.671292 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.771852 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.865741 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.865968 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.866750 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.866798 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.866812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.867403 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:20 crc kubenswrapper[5120]: I1208 19:30:20.867695 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.867935 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:20 crc 
kubenswrapper[5120]: E1208 19:30:20.871926 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:20 crc kubenswrapper[5120]: E1208 19:30:20.972916 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.073455 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.173890 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.274592 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.375379 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.476329 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.577261 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.678381 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: E1208 19:30:21.778513 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.860387 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.881092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.881408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.881605 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.881811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.881999 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.889414 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.905945 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.983895 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.983930 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.983939 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.983952 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5120]: I1208 19:30:21.983963 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.006930 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.085607 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.085683 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.085703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.085734 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.085746 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.106553 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.188330 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.188400 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.188431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.188460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.188482 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.206133 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.291124 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.291227 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.291242 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.291295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.291311 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.393849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.393927 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.394001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.394030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.394049 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.496352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.496397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.496409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.496425 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.496438 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.598313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.598780 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.598973 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.599213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.599434 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.605490 5120 apiserver.go:52] "Watching apiserver" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.615930 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.626583 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-88294","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-tl7xr","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-t6dx4","openshift-multus/network-metrics-daemon-hvzp8","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-5j87q","openshift-multus/multus-additional-cni-plugins-d7p4j","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-ccb8r","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4"] Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.627851 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.629366 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.629609 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.630703 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.631449 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.634500 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.634583 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.634786 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.634986 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.637459 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.638589 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.638818 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.638946 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.641205 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.641289 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.641489 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.641481 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.644050 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.644481 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.647021 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.647078 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.647465 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.647679 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.647932 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.649985 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.663144 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.666160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.666264 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.666354 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.669617 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.669712 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.670143 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.672664 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.675269 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.675647 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.675829 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.676634 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.680036 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.684105 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.684616 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.684677 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.685124 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.685447 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.686118 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.687475 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.689965 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.690785 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.691151 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.691509 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.692111 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.694832 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.695534 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.695790 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.696019 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.696362 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.696617 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.699755 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.700372 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.701387 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.703969 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704002 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704016 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704038 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704055 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704344 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704473 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.704783 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.715315 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.727853 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743293 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743335 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d275b9e-8290-4f3d-8234-69302878d7d2-serviceca\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743361 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-system-cni-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743425 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cnibin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743497 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-multus-certs\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743555 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743612 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-os-release\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7l9v\" (UniqueName: \"kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743669 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcrf\" (UniqueName: \"kubernetes.io/projected/2d275b9e-8290-4f3d-8234-69302878d7d2-kube-api-access-glcrf\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743692 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-hostroot\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743745 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743783 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743809 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743830 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-conf-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743851 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743892 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743940 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743975 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmd7\" (UniqueName: \"kubernetes.io/projected/9908c1e4-2e64-4aec-99cc-1ff468d1a145-kube-api-access-8dmd7\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.743997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744030 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjbh5\" (UniqueName: 
\"kubernetes.io/projected/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-kube-api-access-sjbh5\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744051 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744116 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744142 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-socket-dir-parent\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744179 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-daemon-config\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744208 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744249 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:22 crc kubenswrapper[5120]: 
I1208 19:30:22.744303 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744327 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-cnibin\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-system-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744370 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8ft\" (UniqueName: \"kubernetes.io/projected/0b722a01-9c2b-4e79-a301-c728aa5a90a1-kube-api-access-dr8ft\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-proxy-tls\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744413 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744437 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-os-release\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744458 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9xh\" (UniqueName: \"kubernetes.io/projected/72f36857-3aeb-4132-986f-d12fc2df547c-kube-api-access-8v9xh\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744480 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-netns\") pod \"multus-t6dx4\" (UID: 
\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744503 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-mcd-auth-proxy-config\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744582 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744605 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744631 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-k8s-cni-cncf-io\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744651 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-rootfs\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744673 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744693 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744716 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744736 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cni-binary-copy\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744757 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-bin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744777 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-etc-kubernetes\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744820 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744840 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744866 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744890 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6kc\" (UniqueName: \"kubernetes.io/projected/35fbb2df-5282-4e19-b92d-5b7ffd03f707-kube-api-access-dz6kc\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " 
pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-kubelet\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744940 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744961 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.744981 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d275b9e-8290-4f3d-8234-69302878d7d2-host\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745001 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745022 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-multus\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745061 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745080 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9908c1e4-2e64-4aec-99cc-1ff468d1a145-hosts-file\") pod 
\"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745099 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9908c1e4-2e64-4aec-99cc-1ff468d1a145-tmp-dir\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745184 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.745264 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.745446 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.745592 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.245553263 +0000 UTC m=+75.917659932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.746223 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.746278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.246265376 +0000 UTC m=+75.918372035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.746757 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.748337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.748756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.753137 5120 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.760161 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.760468 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.760492 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.760505 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.760606 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.260559336 +0000 UTC m=+75.932665985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.760858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.766065 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.766964 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.767004 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.767044 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.767131 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.267105362 +0000 UTC m=+75.939212031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.774345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.776720 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.777138 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.777421 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.786977 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.791029 5120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.799218 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.805540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.805584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.805598 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.805615 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.805627 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.810270 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.818502 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.835242 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.843314 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846305 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" 
(UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846346 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846395 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846422 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846447 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846470 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846494 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846520 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846543 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846589 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846635 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846657 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846679 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846705 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846728 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846782 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846829 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846853 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846897 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846922 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846949 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.846999 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847025 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847052 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847074 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847101 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847128 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847152 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847196 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847226 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847250 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847300 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847323 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847349 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847372 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847395 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847418 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847445 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847474 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847521 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847542 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847590 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847636 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847661 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847686 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847708 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847731 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847756 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847780 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847804 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847829 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847855 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: 
\"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847879 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847863 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847907 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847956 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.847987 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848012 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848036 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848036 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848063 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848217 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848258 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848298 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848552 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.848772 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849298 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849398 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849470 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849708 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849766 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849813 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849844 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.849890 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850245 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850343 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850544 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850638 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850871 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850858 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850900 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851071 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.850951 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851194 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851210 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851224 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851258 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851285 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851312 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851311 5120 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851366 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851399 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851424 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851449 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851505 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851533 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851563 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851591 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851646 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851673 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851697 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851722 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851752 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851781 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851830 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851857 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851884 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851911 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851938 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851965 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.851995 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852531 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852661 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852693 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852691 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852727 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853388 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853652 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853444 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853759 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853897 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.853928 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.854033 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.854074 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.854473 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.854479 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.854841 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855148 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855916 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.855952 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856140 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856462 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856611 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856763 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856783 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856861 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.856951 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.857015 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.857371 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.857723 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.858019 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.858107 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.858299 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.858822 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859086 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859132 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.858152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859390 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859854 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859859 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859915 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.859993 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860080 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860064 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860221 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.852698 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860500 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860565 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860688 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860728 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860765 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860803 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860836 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860870 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860912 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860945 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860987 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861042 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861091 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861142 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861386 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861433 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861490 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861555 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861672 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861741 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861842 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861899 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861960 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862010 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862129 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862551 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862648 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862751 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862843 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862937 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 
19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863037 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863128 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863246 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863450 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863558 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863659 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863764 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863860 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863956 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:22 
crc kubenswrapper[5120]: I1208 19:30:22.864102 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864400 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864516 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864611 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864853 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865149 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc 
kubenswrapper[5120]: I1208 19:30:22.865335 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865441 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865532 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865605 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865672 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865919 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865995 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866057 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:22 crc 
kubenswrapper[5120]: I1208 19:30:22.866135 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866236 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866320 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866386 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866449 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866529 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866599 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866671 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866735 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866819 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 
08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867093 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867261 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867462 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867572 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867670 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867776 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867876 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867982 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868075 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868226 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868339 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868448 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868537 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868605 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868679 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868743 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868813 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868881 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod 
\"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868950 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860341 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860678 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860696 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.860824 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861542 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861678 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861705 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.861730 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862348 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862476 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862628 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862658 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862704 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862714 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.862739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863047 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863049 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863149 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863305 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863410 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863617 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863690 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.863733 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864017 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864057 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864103 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864100 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864228 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864288 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864470 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864562 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864588 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864631 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864989 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.864583 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865030 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865226 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865802 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.865942 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866377 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866445 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.866850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867017 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867053 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.867489 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868078 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868092 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868112 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868109 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868146 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.869533 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868321 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868463 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868628 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.868768 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.869034 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.869953 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.869983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870007 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870032 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870058 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870084 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870145 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870196 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870303 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870328 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870454 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-socket-dir-parent\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870483 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-daemon-config\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870511 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870517 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870576 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870725 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870752 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.870955 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871010 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-cnibin\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871025 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-system-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871088 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8ft\" (UniqueName: \"kubernetes.io/projected/0b722a01-9c2b-4e79-a301-c728aa5a90a1-kube-api-access-dr8ft\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871280 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-proxy-tls\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871400 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871406 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871411 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871897 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.871979 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872053 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872070 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872214 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872289 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872755 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872857 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872193 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872924 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872954 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.872969 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-cnibin\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873048 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-system-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873380 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873469 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-os-release\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8v9xh\" (UniqueName: \"kubernetes.io/projected/72f36857-3aeb-4132-986f-d12fc2df547c-kube-api-access-8v9xh\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873549 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-netns\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-mcd-auth-proxy-config\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873648 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-netns\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873738 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-os-release\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.873787 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874022 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874306 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874529 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-k8s-cni-cncf-io\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874757 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874803 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874976 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874565 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875205 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875436 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875498 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875291 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875684 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875791 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876111 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876135 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876399 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876534 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876785 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876837 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-k8s-cni-cncf-io\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875999 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.876647 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.376595315 +0000 UTC m=+76.048701964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876067 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-rootfs\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.875962 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-rootfs\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.877159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.877302 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.877674 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.877836 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.878214 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880743 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880774 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880794 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cni-binary-copy\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880812 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-bin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880856 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-etc-kubernetes\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880895 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880913 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6kc\" (UniqueName: \"kubernetes.io/projected/35fbb2df-5282-4e19-b92d-5b7ffd03f707-kube-api-access-dz6kc\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880930 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-kubelet\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " 
pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880946 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880964 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880981 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d275b9e-8290-4f3d-8234-69302878d7d2-host\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881012 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-multus\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881031 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881047 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881064 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9908c1e4-2e64-4aec-99cc-1ff468d1a145-hosts-file\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881079 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9908c1e4-2e64-4aec-99cc-1ff468d1a145-tmp-dir\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881098 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881117 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881198 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d275b9e-8290-4f3d-8234-69302878d7d2-serviceca\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881218 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-system-cni-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881236 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cnibin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-multus-certs\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881474 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-daemon-config\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.882588 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/2d275b9e-8290-4f3d-8234-69302878d7d2-host\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.878285 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.878299 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876727 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876848 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.878617 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876667 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880255 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880316 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880535 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880707 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880986 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881007 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881508 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881536 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881908 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.874148 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881949 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.882177 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.882201 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.882975 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883000 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883091 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-bin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883267 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-etc-kubernetes\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.879740 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-socket-dir-parent\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883394 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-kubelet\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.883481 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.883579 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:23.383553294 +0000 UTC m=+76.055659973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.883735 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.876254 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-mcd-auth-proxy-config\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884039 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884130 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-system-cni-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884310 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884369 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-var-lib-cni-multus\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884397 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884418 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884488 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9908c1e4-2e64-4aec-99cc-1ff468d1a145-hosts-file\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884553 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cni-binary-copy\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884571 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-host-run-multus-certs\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.880054 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.884654 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-cnibin\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885052 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9908c1e4-2e64-4aec-99cc-1ff468d1a145-tmp-dir\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885276 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-proxy-tls\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 
19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885697 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.881292 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885942 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885963 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-os-release\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b7l9v\" (UniqueName: \"kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886099 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-glcrf\" (UniqueName: \"kubernetes.io/projected/2d275b9e-8290-4f3d-8234-69302878d7d2-kube-api-access-glcrf\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886143 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886173 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-hostroot\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 
19:30:22.886190 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886209 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886226 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-conf-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886259 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886296 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886312 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmd7\" (UniqueName: \"kubernetes.io/projected/9908c1e4-2e64-4aec-99cc-1ff468d1a145-kube-api-access-8dmd7\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc 
kubenswrapper[5120]: I1208 19:30:22.886350 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45hg5\" (UniqueName: \"kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.885241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d275b9e-8290-4f3d-8234-69302878d7d2-serviceca\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886434 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-cni-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-os-release\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886373 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjbh5\" (UniqueName: \"kubernetes.io/projected/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-kube-api-access-sjbh5\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886599 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886644 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886755 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886766 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886729 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-binary-copy\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: 
\"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887044 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887111 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-multus-conf-dir\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887209 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887601 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/72f36857-3aeb-4132-986f-d12fc2df547c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.886776 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887645 5120 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887656 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887669 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887671 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0b722a01-9c2b-4e79-a301-c728aa5a90a1-hostroot\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887679 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: 
\"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887694 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887727 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887748 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887769 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887787 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887807 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887826 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887845 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887873 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887896 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887923 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887950 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887978 5120 reconciler_common.go:299] "Volume detached 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888004 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888029 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888031 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.887700 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888133 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888206 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888238 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888267 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888295 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888203 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/72f36857-3aeb-4132-986f-d12fc2df547c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888321 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888349 5120 reconciler_common.go:299] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888375 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888406 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888432 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888456 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888480 5120 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888506 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888529 5120 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888554 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888579 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888604 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888630 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888656 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888679 5120 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888705 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888732 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888759 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888785 5120 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888811 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888838 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888862 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888889 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888913 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888938 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888965 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.888993 5120 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889018 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889044 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889070 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889089 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889109 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889127 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889223 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889244 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889262 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889282 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889299 5120 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889316 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889334 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889351 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889372 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889395 5120 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889420 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889447 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889468 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889486 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889503 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889525 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889543 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889560 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889577 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889596 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889613 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: 
\"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889633 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889651 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889668 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889687 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889705 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889722 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889739 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889756 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889772 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889789 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889807 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889825 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889848 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889866 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889883 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889902 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889919 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889936 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889953 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889973 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.889993 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890009 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890029 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890047 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890064 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890085 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890105 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890126 5120 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890150 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890204 5120 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890008 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890226 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890272 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890288 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890300 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890313 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890324 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890335 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890333 5120 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890346 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890402 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890433 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890464 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890486 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890596 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890620 5120 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890638 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890818 5120 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890842 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890862 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890879 5120 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890897 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890916 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890933 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890950 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890970 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.890988 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891006 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891027 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891044 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891062 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891080 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891100 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891117 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node 
\"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891134 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891151 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891215 5120 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891232 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891249 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891266 5120 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891283 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891301 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891318 5120 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891337 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891356 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891402 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891426 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891452 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891483 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891507 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891527 5120 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891549 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891570 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891589 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891609 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891630 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891649 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891667 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891687 5120 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891706 5120 reconciler_common.go:299] 
"Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891725 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891747 5120 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891766 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891785 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891805 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891822 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891839 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891858 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891876 5120 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891892 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891908 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891928 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891945 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891963 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891981 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891998 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892015 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.891999 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892032 5120 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892050 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892068 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892086 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892105 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892123 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.892140 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.901569 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.902369 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.902390 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8ft\" (UniqueName: \"kubernetes.io/projected/0b722a01-9c2b-4e79-a301-c728aa5a90a1-kube-api-access-dr8ft\") pod \"multus-t6dx4\" (UID: \"0b722a01-9c2b-4e79-a301-c728aa5a90a1\") " pod="openshift-multus/multus-t6dx4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.902673 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.903893 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.905522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6kc\" (UniqueName: \"kubernetes.io/projected/35fbb2df-5282-4e19-b92d-5b7ffd03f707-kube-api-access-dz6kc\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.905838 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.906319 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.906635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.906796 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907212 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907285 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907376 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907614 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907318 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.907944 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.908585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v9xh\" (UniqueName: \"kubernetes.io/projected/72f36857-3aeb-4132-986f-d12fc2df547c-kube-api-access-8v9xh\") pod \"multus-additional-cni-plugins-d7p4j\" (UID: \"72f36857-3aeb-4132-986f-d12fc2df547c\") " pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.908431 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjbh5\" (UniqueName: \"kubernetes.io/projected/2fab2759-7b9c-43f9-a2b0-5e481a7f0cae-kube-api-access-sjbh5\") pod \"machine-config-daemon-5j87q\" (UID: \"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\") " pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.909055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7l9v\" (UniqueName: \"kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v\") pod \"ovnkube-node-ccb8r\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.910270 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.910518 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.910581 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.910728 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.911054 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcrf\" (UniqueName: \"kubernetes.io/projected/2d275b9e-8290-4f3d-8234-69302878d7d2-kube-api-access-glcrf\") pod \"node-ca-tl7xr\" (UID: \"2d275b9e-8290-4f3d-8234-69302878d7d2\") " pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.912430 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmd7\" (UniqueName: \"kubernetes.io/projected/9908c1e4-2e64-4aec-99cc-1ff468d1a145-kube-api-access-8dmd7\") pod \"node-resolver-88294\" (UID: \"9908c1e4-2e64-4aec-99cc-1ff468d1a145\") " pod="openshift-dns/node-resolver-88294" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.913348 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.913363 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.913822 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.913908 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.914366 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.914399 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.914449 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.914482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.914490 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.915139 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.915761 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.918584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.921375 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.927244 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.936366 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.945436 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.946482 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.947068 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.958688 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.966472 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.968288 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:22 crc kubenswrapper[5120]: W1208 19:30:22.970783 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-f6596ef6c103634237f7ed5095045895c4db35b4979096e834eb0545c875670a WatchSource:0}: Error finding container f6596ef6c103634237f7ed5095045895c4db35b4979096e834eb0545c875670a: Status 404 returned error can't find the container with id f6596ef6c103634237f7ed5095045895c4db35b4979096e834eb0545c875670a Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.973565 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:22 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:22 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:22 crc kubenswrapper[5120]: else Dec 08 19:30:22 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:22 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:22 crc kubenswrapper[5120]: fi Dec 08 19:30:22 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:22 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.974795 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.978581 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: W1208 19:30:22.980768 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-d75a6089f40e10c0322333e18db0343075d62d64d522cd0bcc51243f6059de33 WatchSource:0}: Error 
finding container d75a6089f40e10c0322333e18db0343075d62d64d522cd0bcc51243f6059de33: Status 404 returned error can't find the container with id d75a6089f40e10c0322333e18db0343075d62d64d522cd0bcc51243f6059de33 Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.983149 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:22 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:22 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:22 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:22 crc kubenswrapper[5120]: fi Dec 08 19:30:22 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 19:30:22 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:22 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:22 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:22 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:22 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:22 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:22 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:22 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Dec 08 19:30:22 crc kubenswrapper[5120]: --webhook-port=9743 \ Dec 08 19:30:22 crc kubenswrapper[5120]: ${ho_enable} \ Dec 08 19:30:22 crc kubenswrapper[5120]: --enable-interconnect \ Dec 08 19:30:22 crc kubenswrapper[5120]: --disable-approver \ Dec 08 19:30:22 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:22 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:22 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:22 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Dec 08 19:30:22 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.985991 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:22 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:22 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:22 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:22 crc kubenswrapper[5120]: fi Dec 08 19:30:22 crc kubenswrapper[5120]: Dec 08 19:30:22 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:22 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:22 crc kubenswrapper[5120]: --disable-webhook \ Dec 08 19:30:22 crc kubenswrapper[5120]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:22 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Dec 08 19:30:22 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.986823 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: E1208 19:30:22.987112 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.989102 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993382 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993428 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45hg5\" (UniqueName: \"kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993583 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993737 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993760 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993799 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993877 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.993936 5120 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994153 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994272 5120 reconciler_common.go:299] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994311 5120 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994331 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994363 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994381 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994399 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994327 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994433 5120 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994497 5120 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994513 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994527 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994541 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" 
(UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994554 5120 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994570 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994584 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994593 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994602 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994611 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994619 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994628 5120 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994637 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994646 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994655 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994664 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.994676 5120 reconciler_common.go:299] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.995379 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5120]: I1208 19:30:22.997209 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.001654 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-t6dx4" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.004434 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.008919 5120 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-aac91aeb113bc917e8c0e3e8f8cec1e683c41946fd237e5ccda4c265f11ad455 WatchSource:0}: Error finding container aac91aeb113bc917e8c0e3e8f8cec1e683c41946fd237e5ccda4c265f11ad455: Status 404 returned error can't find the container with id aac91aeb113bc917e8c0e3e8f8cec1e683c41946fd237e5ccda4c265f11ad455 Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.009639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.009694 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.009708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.009729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.009742 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.010643 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45hg5\" (UniqueName: \"kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5\") pod \"ovnkube-control-plane-57b78d8988-ftpb4\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.011510 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.012895 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.013031 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-88294" Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.016856 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b722a01_9c2b_4e79_a301_c728aa5a90a1.slice/crio-788f849118005b5c70fdc08c15e370ba3720362a9122d71061ff3e2c4b323a07 WatchSource:0}: Error finding container 788f849118005b5c70fdc08c15e370ba3720362a9122d71061ff3e2c4b323a07: Status 404 returned error can't find the container with id 788f849118005b5c70fdc08c15e370ba3720362a9122d71061ff3e2c4b323a07 Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.020297 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:23 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:23 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr8ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-t6dx4_openshift-multus(0b722a01-9c2b-4e79-a301-c728aa5a90a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.021458 5120 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-t6dx4" podUID="0b722a01-9c2b-4e79-a301-c728aa5a90a1" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.024475 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tl7xr" Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.024966 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9908c1e4_2e64_4aec_99cc_1ff468d1a145.slice/crio-6442cabb44bf7d827bc269212ca20d37c5818b50524e183beb934fb9f9fbe5ba WatchSource:0}: Error finding container 6442cabb44bf7d827bc269212ca20d37c5818b50524e183beb934fb9f9fbe5ba: Status 404 returned error can't find the container with id 6442cabb44bf7d827bc269212ca20d37c5818b50524e183beb934fb9f9fbe5ba Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.028015 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:23 crc kubenswrapper[5120]: set -uo pipefail Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:23 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:23 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:23 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:23 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: while true; do Dec 08 19:30:23 crc kubenswrapper[5120]: declare -A svc_ips Dec 08 19:30:23 crc kubenswrapper[5120]: for svc in "${services[@]}"; do Dec 08 19:30:23 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:23 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:23 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:23 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 08 19:30:23 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:23 crc kubenswrapper[5120]: for i in ${!cmds[*]} Dec 08 19:30:23 crc kubenswrapper[5120]: do Dec 08 19:30:23 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:23 crc kubenswrapper[5120]: break Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:23 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:23 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:23 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:23 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: continue Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Append resolver entries for services Dec 08 19:30:23 crc kubenswrapper[5120]: rc=0 Dec 08 19:30:23 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:23 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:23 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: continue Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:23 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:23 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:23 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: unset svc_ips Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dmd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-88294_openshift-dns(9908c1e4-2e64-4aec-99cc-1ff468d1a145): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.029151 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-88294" podUID="9908c1e4-2e64-4aec-99cc-1ff468d1a145" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.033824 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.040348 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:23 crc kubenswrapper[5120]: while [ true ]; Dec 08 19:30:23 crc kubenswrapper[5120]: do Dec 08 19:30:23 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:23 crc kubenswrapper[5120]: echo $f Dec 08 19:30:23 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:23 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:23 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:23 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: else Dec 08 19:30:23 crc kubenswrapper[5120]: mkdir $reg_dir_path Dec 08 19:30:23 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:23 crc kubenswrapper[5120]: echo $d Dec 08 19:30:23 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:23 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:23 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait ${!} Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glcrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tl7xr_openshift-image-registry(2d275b9e-8290-4f3d-8234-69302878d7d2): CreateContainerConfigError: services have not yet been 
read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.041109 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.041406 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tl7xr" podUID="2d275b9e-8290-4f3d-8234-69302878d7d2" Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.047055 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fab2759_7b9c_43f9_a2b0_5e481a7f0cae.slice/crio-21ae76ff49af3c94124e94ebfc5a40647926e7cefd430fa08f9a46aa89644954 WatchSource:0}: Error finding container 21ae76ff49af3c94124e94ebfc5a40647926e7cefd430fa08f9a46aa89644954: Status 404 returned error can't find the container with id 21ae76ff49af3c94124e94ebfc5a40647926e7cefd430fa08f9a46aa89644954 Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.051726 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjbh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5j87q_openshift-machine-config-operator(2fab2759-7b9c-43f9-a2b0-5e481a7f0cae): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.053961 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v9xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-d7p4j_openshift-multus(72f36857-3aeb-4132-986f-d12fc2df547c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.054499 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjbh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5j87q_openshift-machine-config-operator(2fab2759-7b9c-43f9-a2b0-5e481a7f0cae): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.055416 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" podUID="72f36857-3aeb-4132-986f-d12fc2df547c" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.056116 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.074980 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.084755 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.085250 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a06e739_3597_44df_894c_328bdbcf0af2.slice/crio-a40383eaee87590ffd7736225b0e7f83f732c978107c5357d75573b319d9f93a WatchSource:0}: Error finding container a40383eaee87590ffd7736225b0e7f83f732c978107c5357d75573b319d9f93a: Status 404 returned error can't find the container with id a40383eaee87590ffd7736225b0e7f83f732c978107c5357d75573b319d9f93a Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.096327 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:23 crc kubenswrapper[5120]: apiVersion: v1 Dec 08 19:30:23 crc kubenswrapper[5120]: clusters: Dec 08 19:30:23 crc kubenswrapper[5120]: - cluster: Dec 08 19:30:23 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Dec 08 19:30:23 crc kubenswrapper[5120]: name: default-cluster Dec 08 19:30:23 crc kubenswrapper[5120]: contexts: Dec 08 19:30:23 crc kubenswrapper[5120]: - context: Dec 08 19:30:23 crc kubenswrapper[5120]: cluster: default-cluster Dec 08 19:30:23 crc kubenswrapper[5120]: namespace: default Dec 08 19:30:23 crc kubenswrapper[5120]: user: default-auth Dec 08 19:30:23 crc kubenswrapper[5120]: name: default-context Dec 08 19:30:23 crc kubenswrapper[5120]: current-context: default-context Dec 08 19:30:23 crc kubenswrapper[5120]: kind: Config Dec 08 19:30:23 crc kubenswrapper[5120]: preferences: {} Dec 08 19:30:23 crc kubenswrapper[5120]: users: Dec 08 19:30:23 crc kubenswrapper[5120]: - name: default-auth Dec 08 19:30:23 crc kubenswrapper[5120]: user: Dec 08 19:30:23 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:23 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:23 crc kubenswrapper[5120]: EOF Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7l9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ccb8r_openshift-ovn-kubernetes(1a06e739-3597-44df-894c-328bdbcf0af2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc 
kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.097626 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.122864 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.122893 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.122904 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.122920 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.122930 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: W1208 19:30:23.124738 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f34113_e3de_4681_aa3e_c78a89bec2bf.slice/crio-499e314a660c04ddb8f2c87d4f586671f0f3470ea4971cdb1a293eaac193d7ce WatchSource:0}: Error finding container 499e314a660c04ddb8f2c87d4f586671f0f3470ea4971cdb1a293eaac193d7ce: Status 404 returned error can't find the container with id 499e314a660c04ddb8f2c87d4f586671f0f3470ea4971cdb1a293eaac193d7ce Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.126771 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:23 crc kubenswrapper[5120]: set -euo pipefail Dec 08 19:30:23 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:23 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:23 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present. Dec 08 19:30:23 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 19:30:23 crc kubenswrapper[5120]: TS=$(date +%s) Dec 08 19:30:23 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 19:30:23 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: log_missing_certs(){ Dec 08 19:30:23 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Dec 08 19:30:23 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 19:30:23 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: } Dec 08 19:30:23 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 19:30:23 crc kubenswrapper[5120]: log_missing_certs Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 5 Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Dec 08 19:30:23 crc kubenswrapper[5120]: --logtostderr \ Dec 08 19:30:23 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45hg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ftpb4_openshift-ovn-kubernetes(39f34113-e3de-4681-aa3e-c78a89bec2bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.128869 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:23 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: 
if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Dec 08 19:30:23 crc kubenswrapper[5120]: # will rollout control plane pods as well Dec 08 19:30:23 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: route_advertisements_enable_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == 
"true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 19:30:23 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Dec 08 19:30:23 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Dec 08 19:30:23 crc kubenswrapper[5120]: else Dec 08 19:30:23 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 19:30:23 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-interconnect \ Dec 08 19:30:23 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 19:30:23 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-enable-pprof \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-ip=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-qos=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-service=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-multicast \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Dec 08 19:30:23 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45hg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ftpb4_openshift-ovn-kubernetes(39f34113-e3de-4681-aa3e-c78a89bec2bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.130109 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.225994 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.226049 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.226068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.226092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.226116 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.298467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.298626 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.298761 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.298718424 +0000 UTC m=+76.970825103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.299023 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.299115 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.299207 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299257 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299286 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299303 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299354 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299450 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299487 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299509 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299380 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.299357604 +0000 UTC m=+76.971464263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299590 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.299573221 +0000 UTC m=+76.971679880 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.299652 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.299610842 +0000 UTC m=+76.971717491 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.328314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.328383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.328409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.328441 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.328462 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.400920 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.400886145 +0000 UTC m=+77.072992834 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.400755 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.401367 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.401468 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.401447293 +0000 UTC m=+77.073553982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.401510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.431007 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.431078 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.431103 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.431130 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.431151 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.535082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.535213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.535235 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.535260 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.535282 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.638211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.638306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.638333 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.638366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.638388 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.664104 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.664986 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.666711 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.668241 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.670555 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.672154 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.674194 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.676070 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.677022 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.679016 5120 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.680617 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.686257 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.688858 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.691121 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.692136 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.694389 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.696310 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.699213 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.700977 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.702618 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.705085 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.709088 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.710376 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.712009 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.714503 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.717036 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.718619 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.720142 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.725082 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.726336 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.729322 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.730580 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.732645 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.733836 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.735632 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.736467 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.737367 5120 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.738073 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.742351 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.743981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.744022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.744050 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.744073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.744088 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.745982 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.747092 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.748794 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.749596 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.751438 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.752460 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.753146 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.754693 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" 
path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.756131 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.757746 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.758836 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.762249 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.763943 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.766597 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.768530 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.771791 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.773237 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.775820 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.777448 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.780432 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.780560 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.780452 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.780696 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.846914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.846968 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.846989 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.847012 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.847033 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.949095 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.949556 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.949724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.949867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.949985 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.965630 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"21ae76ff49af3c94124e94ebfc5a40647926e7cefd430fa08f9a46aa89644954"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.969223 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d75a6089f40e10c0322333e18db0343075d62d64d522cd0bcc51243f6059de33"} Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.970859 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t6dx4" event={"ID":"0b722a01-9c2b-4e79-a301-c728aa5a90a1","Type":"ContainerStarted","Data":"788f849118005b5c70fdc08c15e370ba3720362a9122d71061ff3e2c4b323a07"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.972667 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjbh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5j87q_openshift-machine-config-operator(2fab2759-7b9c-43f9-a2b0-5e481a7f0cae): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.972852 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:23 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 19:30:23 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:23 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:23 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:23 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:23 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --webhook-port=9743 \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ho_enable} \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-interconnect \ Dec 08 19:30:23 crc kubenswrapper[5120]: --disable-approver \ Dec 08 19:30:23 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:23 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.973033 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"f6596ef6c103634237f7ed5095045895c4db35b4979096e834eb0545c875670a"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.973976 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:23 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:23 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr8ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-t6dx4_openshift-multus(0b722a01-9c2b-4e79-a301-c728aa5a90a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.975144 5120 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-t6dx4" podUID="0b722a01-9c2b-4e79-a301-c728aa5a90a1" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.975238 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:23 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --disable-webhook \ Dec 08 19:30:23 crc kubenswrapper[5120]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.976029 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" 
event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"a40383eaee87590ffd7736225b0e7f83f732c978107c5357d75573b319d9f93a"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.976369 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.976424 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:23 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:23 crc kubenswrapper[5120]: else Dec 08 19:30:23 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:23 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae
7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 
19:30:23.976458 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjbh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5j87q_openshift-machine-config-operator(2fab2759-7b9c-43f9-a2b0-5e481a7f0cae): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.977499 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.977583 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.977660 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tl7xr" 
event={"ID":"2d275b9e-8290-4f3d-8234-69302878d7d2","Type":"ContainerStarted","Data":"65839e3f3be91bd43474bcb45c40d209fce645951075571338ed5a0acd45cb4a"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.979405 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:23 crc kubenswrapper[5120]: apiVersion: v1 Dec 08 19:30:23 crc kubenswrapper[5120]: clusters: Dec 08 19:30:23 crc kubenswrapper[5120]: - cluster: Dec 08 19:30:23 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Dec 08 19:30:23 crc kubenswrapper[5120]: name: default-cluster Dec 08 19:30:23 crc kubenswrapper[5120]: contexts: Dec 08 19:30:23 crc kubenswrapper[5120]: - context: Dec 08 19:30:23 crc kubenswrapper[5120]: cluster: default-cluster Dec 08 19:30:23 crc kubenswrapper[5120]: namespace: default Dec 08 19:30:23 crc kubenswrapper[5120]: user: default-auth Dec 08 19:30:23 crc kubenswrapper[5120]: name: default-context Dec 08 19:30:23 crc kubenswrapper[5120]: current-context: default-context Dec 08 19:30:23 crc kubenswrapper[5120]: kind: Config Dec 08 19:30:23 crc kubenswrapper[5120]: preferences: {} Dec 08 19:30:23 crc kubenswrapper[5120]: users: Dec 08 19:30:23 crc kubenswrapper[5120]: - name: default-auth Dec 08 19:30:23 crc kubenswrapper[5120]: user: Dec 08 19:30:23 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:23 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:23 crc kubenswrapper[5120]: EOF Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7l9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ccb8r_openshift-ovn-kubernetes(1a06e739-3597-44df-894c-328bdbcf0af2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.980288 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:23 crc kubenswrapper[5120]: 
while [ true ]; Dec 08 19:30:23 crc kubenswrapper[5120]: do Dec 08 19:30:23 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:23 crc kubenswrapper[5120]: echo $f Dec 08 19:30:23 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:23 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:23 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:23 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: else Dec 08 19:30:23 crc kubenswrapper[5120]: mkdir $reg_dir_path Dec 08 19:30:23 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:23 crc kubenswrapper[5120]: echo $d Dec 08 19:30:23 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:23 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:23 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait ${!} Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glcrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tl7xr_openshift-image-registry(2d275b9e-8290-4f3d-8234-69302878d7d2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.980376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-88294" event={"ID":"9908c1e4-2e64-4aec-99cc-1ff468d1a145","Type":"ContainerStarted","Data":"6442cabb44bf7d827bc269212ca20d37c5818b50524e183beb934fb9f9fbe5ba"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.980469 5120 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.981444 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tl7xr" podUID="2d275b9e-8290-4f3d-8234-69302878d7d2" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.981753 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"aac91aeb113bc917e8c0e3e8f8cec1e683c41946fd237e5ccda4c265f11ad455"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.983440 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:23 crc kubenswrapper[5120]: set -uo pipefail Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:23 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:23 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:23 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:23 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: while true; do Dec 08 19:30:23 crc kubenswrapper[5120]: declare -A svc_ips Dec 08 19:30:23 crc kubenswrapper[5120]: for svc in "${services[@]}"; do Dec 08 19:30:23 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:23 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:23 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:23 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 08 19:30:23 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:23 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:23 crc kubenswrapper[5120]: for i in ${!cmds[*]} Dec 08 19:30:23 crc kubenswrapper[5120]: do Dec 08 19:30:23 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:23 crc kubenswrapper[5120]: break Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:23 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:23 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:23 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:23 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: continue Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Append resolver entries for services Dec 08 19:30:23 crc kubenswrapper[5120]: rc=0 Dec 08 19:30:23 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:23 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:23 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: continue Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:23 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:23 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:23 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:23 crc kubenswrapper[5120]: unset svc_ips Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dmd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-88294_openshift-dns(9908c1e4-2e64-4aec-99cc-1ff468d1a145): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.983708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerStarted","Data":"499e314a660c04ddb8f2c87d4f586671f0f3470ea4971cdb1a293eaac193d7ce"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.984535 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-88294" 
podUID="9908c1e4-2e64-4aec-99cc-1ff468d1a145" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.984563 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.985657 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.985732 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerStarted","Data":"a8124434ce9f4a19a3381c2647a08a99b74e520196d965695799881010fa4d5f"} Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.987258 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:23 crc kubenswrapper[5120]: set -euo pipefail Dec 08 19:30:23 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:23 crc kubenswrapper[5120]: 
TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:23 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present. Dec 08 19:30:23 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 19:30:23 crc kubenswrapper[5120]: TS=$(date +%s) Dec 08 19:30:23 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 19:30:23 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: log_missing_certs(){ Dec 08 19:30:23 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 19:30:23 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 19:30:23 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: } Dec 08 19:30:23 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 19:30:23 crc kubenswrapper[5120]: log_missing_certs Dec 08 19:30:23 crc kubenswrapper[5120]: sleep 5 Dec 08 19:30:23 crc kubenswrapper[5120]: done Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Dec 08 19:30:23 crc kubenswrapper[5120]: --logtostderr \ Dec 08 19:30:23 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 19:30:23 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Dec 08 19:30:23 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Dec 08 19:30:23 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45hg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ftpb4_openshift-ovn-kubernetes(39f34113-e3de-4681-aa3e-c78a89bec2bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc 
kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.988044 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v9xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-d7p4j_openshift-multus(72f36857-3aeb-4132-986f-d12fc2df547c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.989095 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" podUID="72f36857-3aeb-4132-986f-d12fc2df547c" Dec 08 19:30:23 crc kubenswrapper[5120]: I1208 19:30:23.989318 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5120]: E1208 19:30:23.991146 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:23 crc kubenswrapper[5120]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: set -o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: source "/env/_master" Dec 08 19:30:23 crc kubenswrapper[5120]: set +o allexport Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Dec 08 
19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Dec 08 19:30:23 crc kubenswrapper[5120]: # will rollout control plane pods as well Dec 08 19:30:23 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: route_advertisements_enable_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc 
kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 19:30:23 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Dec 08 19:30:23 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Dec 08 19:30:23 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Dec 08 19:30:23 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Dec 08 19:30:23 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Dec 08 19:30:23 crc kubenswrapper[5120]: else Dec 08 19:30:23 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 19:30:23 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:23 crc kubenswrapper[5120]: fi Dec 08 19:30:23 crc kubenswrapper[5120]: Dec 08 19:30:23 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 19:30:23 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-interconnect \ Dec 08 19:30:23 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 19:30:23 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-enable-pprof \ Dec 08 19:30:23 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-ip=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-qos=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-egress-service=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-multicast \ Dec 08 19:30:23 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Dec 08 19:30:23 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Dec 08 19:30:23 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45hg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ftpb4_openshift-ovn-kubernetes(39f34113-e3de-4681-aa3e-c78a89bec2bf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:23 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:23.993108 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.004524 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.015322 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.044402 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.051888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.051945 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.051960 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.051988 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.052009 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.054262 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.063070 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.073213 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.088226 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.102917 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.111888 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.118680 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.131257 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.149649 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.154525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.154579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.154596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.154620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.154635 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.165245 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.175061 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.186770 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.198712 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.217958 5120 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.229907 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.238543 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.255000 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.256586 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.256631 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.256650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.256675 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.256695 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.268494 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e
69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.282618 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.298114 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.309406 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.316371 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.316456 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.316552 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316616 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316676 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316714 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316744 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316718 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2025-12-08 19:30:26.31669129 +0000 UTC m=+78.988797959 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316760 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316784 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:26.316761173 +0000 UTC m=+78.988867862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316812 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:26.316797664 +0000 UTC m=+78.988904403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.316589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316874 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316912 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.316932 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.317011 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:26.31699251 +0000 UTC m=+78.989099199 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.329588 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.343576 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.354415 5120 
status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.358431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.358488 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.358507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.358530 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.358546 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.367343 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.384007 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.397663 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.406272 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.415217 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.417650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.417781 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.417833 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:26.417806369 +0000 UTC m=+79.089913038 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.417923 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.418014 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:26.417995585 +0000 UTC m=+79.090102264 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.431145 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.452382 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.460981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.461024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.461041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.461058 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.461072 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.471274 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.480073 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.488661 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.563131 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.563252 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.563273 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.563300 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.563318 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.659219 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.659278 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.659384 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:24 crc kubenswrapper[5120]: E1208 19:30:24.659593 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.666270 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.666327 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.666344 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.666366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.666379 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.768837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.768884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.768894 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.768906 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.768915 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.871821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.871894 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.871918 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.871950 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.871973 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.974343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.974758 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.974786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.974817 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5120]: I1208 19:30:24.974840 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.076612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.076699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.076770 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.076801 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.076826 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.179129 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.179229 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.179255 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.179284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.179307 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.281313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.281406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.281431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.281459 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.281481 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.383656 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.383769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.383795 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.383832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.383860 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.486358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.486428 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.486446 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.486471 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.486488 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.588948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.588995 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.589004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.589020 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.589034 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.659832 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.659881 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:25 crc kubenswrapper[5120]: E1208 19:30:25.660057 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:25 crc kubenswrapper[5120]: E1208 19:30:25.660286 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.691566 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.691621 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.691633 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.691651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.691664 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.793919 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.794002 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.794028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.794057 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.794083 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.897249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.897322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.897339 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.897361 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.897377 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.999114 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.999168 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.999186 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.999231 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5120]: I1208 19:30:25.999242 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.101906 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.102003 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.102029 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.102064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.102089 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.204937 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.205014 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.205027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.205046 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.205061 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.307898 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.307991 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.308004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.308023 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.308034 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.337136 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.337284 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337349 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.337400 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337450 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.337427822 +0000 UTC m=+83.009534471 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.337481 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337538 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337577 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337637 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337658 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337613 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337639 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.337608808 +0000 UTC m=+83.009715497 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337729 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337755 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337770 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.337740742 +0000 UTC m=+83.009847421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.337813 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.337801334 +0000 UTC m=+83.009908053 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.410502 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.410561 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.410577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.410598 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.410614 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.439034 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.439235 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.439209552 +0000 UTC m=+83.111316221 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.439367 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.439512 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.439725 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:30.439693937 +0000 UTC m=+83.111800626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.513055 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.513101 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.513112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.513128 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.513140 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.615227 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.615280 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.615317 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.615334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.615346 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.659317 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.659398 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.659532 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:26 crc kubenswrapper[5120]: E1208 19:30:26.660403 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.717760 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.718176 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.718458 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.718620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.718742 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.821101 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.821996 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.822130 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.822411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.822573 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.924911 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.925390 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.925607 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.925762 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:26 crc kubenswrapper[5120]: I1208 19:30:26.925904 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.028125 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.028193 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.028206 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.028223 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.028239 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.130747 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.131076 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.131284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.131492 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.131671 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.234338 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.234749 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.234938 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.235135 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.235370 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.337942 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.338148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.338323 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.338352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.338362 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.441295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.441349 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.441362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.441378 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.441388 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.542995 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.543047 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.543060 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.543076 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.543090 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.645535 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.645604 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.645621 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.645643 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.645665 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.659458 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:27 crc kubenswrapper[5120]: E1208 19:30:27.659590 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.659598 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:27 crc kubenswrapper[5120]: E1208 19:30:27.659706 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.679693 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T1
9:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.693711 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.707845 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.720310 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.747136 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\
"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.748424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.749138 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.749324 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.749486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.749594 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.763481 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.775108 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.793832 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"sta
rtTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.806541 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.819534 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.825269 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.831087 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.841298 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.851509 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.851676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.851764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.851836 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.851917 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.857011 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.887748 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.906966 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf
9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\
\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.922702 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.933923 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.944438 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.953810 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.954089 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.954116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.954125 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.954139 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:27 crc kubenswrapper[5120]: I1208 19:30:27.954148 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.056700 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.056774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.056794 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.056820 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.056839 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.158963 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.159057 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.159085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.159116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.159138 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.261208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.261292 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.261316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.261346 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.261369 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.363648 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.363693 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.363703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.363717 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.363726 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.466278 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.466373 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.466387 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.466402 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.466412 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.568833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.568916 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.568944 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.568993 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.569022 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.659235 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:28 crc kubenswrapper[5120]: E1208 19:30:28.659368 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.659449 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:28 crc kubenswrapper[5120]: E1208 19:30:28.659689 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.671126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.671188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.671198 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.671213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.671225 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.772487 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.772532 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.772542 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.772557 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.772567 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.873974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.874054 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.874078 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.874108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.874130 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.976494 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.976560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.976611 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.976644 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5120]: I1208 19:30:28.976660 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.078534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.078582 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.078596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.078611 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.078625 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.180534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.180614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.180639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.180669 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.180690 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.283230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.283304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.283330 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.283364 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.283386 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.385809 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.385865 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.385878 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.385894 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.385906 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.487581 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.487654 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.487673 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.487699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.487716 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.589734 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.589786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.589798 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.589817 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.589829 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.659103 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.659107 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.659434 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.659619 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.691774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.691854 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.691882 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.691911 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.691936 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.693061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.693118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.693143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.693170 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.693237 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.710131 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.714634 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.714723 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.714749 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.714778 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.714800 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.731912 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.736904 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.736965 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.736981 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.736996 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.737008 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.748255 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.753228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.753288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.753303 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.753323 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.753339 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.765395 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.770198 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.770264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.770288 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.770306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.770318 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.800751 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9b47515e-a1bf-4035-b74a-f035e64eeafd\\\",\\\"systemUUID\\\":\\\"75177e4e-6d2e-4439-a8d9-c238b596e121\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:29 crc kubenswrapper[5120]: E1208 19:30:29.801021 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.802676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.802724 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.802735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.802752 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.802763 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.905663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.905701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.905710 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.905721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5120]: I1208 19:30:29.905731 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.008383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.008442 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.008460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.008482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.008498 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.111267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.111318 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.111334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.111351 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.111363 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.213742 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.213792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.213805 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.213821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.213830 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.316828 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.316917 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.316963 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.317006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.317031 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.382348 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.382389 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.382413 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.382438 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382754 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382845 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382769 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382773 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383022 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382794 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383034 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.382863 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383002 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.382962537 +0000 UTC m=+91.055069216 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383221 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.383203705 +0000 UTC m=+91.055310434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383244 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.383236366 +0000 UTC m=+91.055343015 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.383261 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.383257016 +0000 UTC m=+91.055363665 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.419118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.419214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.419228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.419246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.419257 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.482974 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.483285 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.483256789 +0000 UTC m=+91.155363478 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.483397 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.483551 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.483648 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:38.483630681 +0000 UTC m=+91.155737370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.521897 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.521981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.522000 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.522028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.522046 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.625484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.625553 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.625572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.625598 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.625615 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.658883 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.659061 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.659105 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:30 crc kubenswrapper[5120]: E1208 19:30:30.659425 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.728264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.728361 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.728386 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.728423 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.728449 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.831519 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.831611 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.831634 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.831662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.831682 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.933614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.933696 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.933721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.933755 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5120]: I1208 19:30:30.933778 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.035851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.035907 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.035924 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.035946 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.035965 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.138911 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.138983 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.139830 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.139866 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.139890 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.241688 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.241755 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.241764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.241779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.241788 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.344552 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.344614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.344625 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.344642 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.344653 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.446632 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.446698 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.446710 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.446727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.446739 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.548954 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.549028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.549046 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.549071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.549089 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.651462 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.651770 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.651871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.651963 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.652060 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.658853 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.658877 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:31 crc kubenswrapper[5120]: E1208 19:30:31.659123 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:31 crc kubenswrapper[5120]: E1208 19:30:31.659227 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.754939 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.755006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.755024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.755047 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.755064 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.857728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.857791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.857810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.857835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.857852 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.960507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.960597 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.960624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.960657 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5120]: I1208 19:30:31.960682 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.063610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.063713 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.063738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.063808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.063833 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.167025 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.167132 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.167208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.167244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.167266 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.269869 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.269946 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.269977 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.270004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.270042 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.372591 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.372677 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.372716 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.372746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.372766 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.474407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.474447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.474464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.474484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.474496 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.576682 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.576769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.576804 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.576840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.576903 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.659744 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.659822 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:32 crc kubenswrapper[5120]: E1208 19:30:32.659913 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:32 crc kubenswrapper[5120]: E1208 19:30:32.660157 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.679024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.679073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.679086 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.679099 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.679110 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.781278 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.781348 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.781372 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.781403 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.781425 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.883464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.883546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.883571 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.883617 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.883642 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.986469 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.986563 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.986591 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.986621 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5120]: I1208 19:30:32.986646 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.089833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.090043 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.090079 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.090113 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.090137 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.192865 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.192980 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.193010 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.193041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.193077 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.295557 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.295616 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.295628 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.295649 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.295661 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.399028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.399089 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.399105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.399124 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.399137 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.502398 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.502481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.502499 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.502529 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.502555 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.605824 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.605890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.605910 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.605945 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.605971 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.659907 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.659907 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:33 crc kubenswrapper[5120]: E1208 19:30:33.660199 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:33 crc kubenswrapper[5120]: E1208 19:30:33.660318 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.708963 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.709085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.709108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.709140 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.709161 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.811377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.811444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.811456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.811474 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.811487 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.914472 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.914554 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.914570 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.914593 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5120]: I1208 19:30:33.914607 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.016941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.017042 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.017070 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.017107 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.017131 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.119405 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.119484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.119524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.119561 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.119586 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.221999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.222076 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.222087 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.222100 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.222110 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.324932 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.325008 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.325030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.325052 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.325069 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.427535 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.427605 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.427622 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.427646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.427662 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.529893 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.529970 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.530154 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.530262 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.530294 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.632772 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.632870 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.632898 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.632928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.632948 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.659816 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.660086 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.660121 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.660484 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.663504 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:34 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:34 crc kubenswrapper[5120]: while [ true ]; Dec 08 19:30:34 crc kubenswrapper[5120]: do Dec 08 19:30:34 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:34 crc kubenswrapper[5120]: echo $f Dec 08 19:30:34 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:34 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:34 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:34 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:34 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:34 crc kubenswrapper[5120]: else Dec 08 19:30:34 crc kubenswrapper[5120]: mkdir $reg_dir_path Dec 08 19:30:34 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:34 crc kubenswrapper[5120]: fi Dec 08 19:30:34 crc kubenswrapper[5120]: done Dec 08 19:30:34 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:34 crc kubenswrapper[5120]: echo $d Dec 08 19:30:34 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:34 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:34 crc kubenswrapper[5120]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 19:30:34 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:34 crc kubenswrapper[5120]: fi Dec 08 19:30:34 crc kubenswrapper[5120]: done Dec 08 19:30:34 crc kubenswrapper[5120]: sleep 60 & wait ${!} Dec 08 19:30:34 crc kubenswrapper[5120]: done Dec 08 19:30:34 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glcrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tl7xr_openshift-image-registry(2d275b9e-8290-4f3d-8234-69302878d7d2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:34 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.664667 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:34 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:34 crc kubenswrapper[5120]: apiVersion: v1 Dec 08 19:30:34 crc kubenswrapper[5120]: clusters: Dec 08 19:30:34 crc kubenswrapper[5120]: - cluster: Dec 08 19:30:34 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:34 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Dec 08 19:30:34 crc kubenswrapper[5120]: name: default-cluster Dec 08 19:30:34 crc kubenswrapper[5120]: contexts: Dec 08 19:30:34 crc kubenswrapper[5120]: - context: Dec 08 19:30:34 crc kubenswrapper[5120]: cluster: default-cluster Dec 08 19:30:34 crc kubenswrapper[5120]: namespace: default Dec 08 19:30:34 crc kubenswrapper[5120]: user: default-auth Dec 08 19:30:34 crc kubenswrapper[5120]: name: default-context Dec 08 19:30:34 crc kubenswrapper[5120]: current-context: default-context Dec 08 19:30:34 crc kubenswrapper[5120]: kind: Config Dec 08 19:30:34 crc kubenswrapper[5120]: preferences: {} Dec 08 19:30:34 crc kubenswrapper[5120]: users: Dec 08 19:30:34 crc kubenswrapper[5120]: - name: default-auth Dec 08 19:30:34 crc kubenswrapper[5120]: user: Dec 08 19:30:34 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 
19:30:34 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:34 crc kubenswrapper[5120]: EOF Dec 08 19:30:34 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7l9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ccb8r_openshift-ovn-kubernetes(1a06e739-3597-44df-894c-328bdbcf0af2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:34 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.664856 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tl7xr" podUID="2d275b9e-8290-4f3d-8234-69302878d7d2" Dec 08 19:30:34 crc kubenswrapper[5120]: E1208 19:30:34.665981 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.736146 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.736214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.736226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.736244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.736258 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.838549 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.838619 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.838643 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.838671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.838694 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.940820 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.940915 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.940948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.940981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5120]: I1208 19:30:34.941006 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.043886 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.043971 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.043992 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.044022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.044040 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.145883 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.145937 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.145954 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.145976 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.145993 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.248240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.248290 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.248303 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.248319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.248345 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.350646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.350718 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.350736 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.350765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.350783 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.452653 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.452739 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.452752 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.452768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.452780 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.555191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.555252 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.555263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.555279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.555291 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.688002 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.693527 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.694238 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.695560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.695603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.695652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.695669 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.695668 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.695703 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.698733 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.699205 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v9xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-d7p4j_openshift-multus(72f36857-3aeb-4132-986f-d12fc2df547c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.699201 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:35 crc kubenswrapper[5120]: container 
&Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:35 crc kubenswrapper[5120]: set -uo pipefail Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:35 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:35 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:35 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:35 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:35 crc kubenswrapper[5120]: exit 1 Dec 08 19:30:35 crc kubenswrapper[5120]: fi Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: while true; do Dec 08 19:30:35 crc kubenswrapper[5120]: declare -A svc_ips Dec 08 19:30:35 crc kubenswrapper[5120]: for svc in "${services[@]}"; do Dec 08 19:30:35 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:35 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:35 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:35 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:35 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:35 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:35 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:35 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:35 crc kubenswrapper[5120]: for i in ${!cmds[*]} Dec 08 19:30:35 crc kubenswrapper[5120]: do Dec 08 19:30:35 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:35 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:35 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:35 crc kubenswrapper[5120]: break Dec 08 19:30:35 crc kubenswrapper[5120]: fi Dec 08 19:30:35 crc kubenswrapper[5120]: done Dec 08 19:30:35 crc kubenswrapper[5120]: done Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:35 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:35 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:35 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:35 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:35 crc kubenswrapper[5120]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:35 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:35 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:35 crc kubenswrapper[5120]: continue Dec 08 19:30:35 crc kubenswrapper[5120]: fi Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: # Append resolver entries for services Dec 08 19:30:35 crc kubenswrapper[5120]: rc=0 Dec 08 19:30:35 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:35 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:35 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 19:30:35 crc kubenswrapper[5120]: done Dec 08 19:30:35 crc kubenswrapper[5120]: done Dec 08 19:30:35 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:35 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:35 crc kubenswrapper[5120]: continue Dec 08 19:30:35 crc kubenswrapper[5120]: fi Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: Dec 08 19:30:35 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:35 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:35 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:35 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:35 crc kubenswrapper[5120]: fi Dec 08 19:30:35 crc kubenswrapper[5120]: sleep 60 & wait Dec 08 19:30:35 crc kubenswrapper[5120]: unset svc_ips Dec 08 19:30:35 crc kubenswrapper[5120]: done Dec 08 19:30:35 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dmd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-88294_openshift-dns(9908c1e4-2e64-4aec-99cc-1ff468d1a145): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Dec 08 19:30:35 crc kubenswrapper[5120]: > logger="UnhandledError" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.699700 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.699868 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.700309 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" podUID="72f36857-3aeb-4132-986f-d12fc2df547c" Dec 08 19:30:35 crc kubenswrapper[5120]: E1208 19:30:35.700340 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-88294" podUID="9908c1e4-2e64-4aec-99cc-1ff468d1a145" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.797504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.797560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.797573 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.797590 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.797601 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.900114 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.900241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.900259 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.900280 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5120]: I1208 19:30:35.900295 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.002536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.002985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.003085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.003191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.003264 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.105804 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.105896 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.105907 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.105931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.105942 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.208550 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.208624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.208646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.208668 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.208682 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.311026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.311105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.311130 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.311160 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.311217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.414618 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.414703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.414728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.414759 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.414784 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.516439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.516516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.516533 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.516550 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.516562 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.619719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.619810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.619837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.619862 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.619883 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.659204 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:36 crc kubenswrapper[5120]: E1208 19:30:36.659430 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.660367 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:36 crc kubenswrapper[5120]: E1208 19:30:36.660568 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.660573 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:30:36 crc kubenswrapper[5120]: E1208 19:30:36.661051 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.722572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.722625 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.722647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.722669 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.722686 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.825154 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.825261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.825285 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.825314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.825335 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.928591 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.928643 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.928660 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.928679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5120]: I1208 19:30:36.928692 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.021399 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"c37ba9af194c07141c7b760700120f8c94e9ab27e9c2f285da8d340b93d288e9"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.021457 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.030884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.031105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.031192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.031264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.031357 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.038847 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.048618 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.058633 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.075225 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[
0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.085330 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.096408 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.110206 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.120913 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.133514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.133737 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.133811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.133907 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.133992 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.135912 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.155318 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.170148 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf
9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\
\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.185098 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.200526 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.211653 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c37ba9af194c07141c7b760700120f8c94e9ab27e9c2f285da8d340b93d288e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.223144 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.231929 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.239138 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.239215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.239226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.239238 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.239247 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.242781 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.252659 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.260302 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.341903 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.342046 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.342062 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.342077 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.342100 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.444438 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.444484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.444496 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.444513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.444524 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.546352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.546423 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.546447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.546476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.546497 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.648488 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.648557 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.648575 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.648601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.648625 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.658924 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:37 crc kubenswrapper[5120]: E1208 19:30:37.659145 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.659495 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:37 crc kubenswrapper[5120]: E1208 19:30:37.659695 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.684037 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T1
9:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.698147 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.710737 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.719854 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-88294" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9908c1e4-2e64-4aec-99cc-1ff468d1a145\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dmd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-88294\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.740131 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a06e739-3597-44df-894c-328bdbcf0af2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\
"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7l9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ccb8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.750537 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.750574 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.750585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.750597 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.750606 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.752495 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f34113-e3de-4681-aa3e-c78a89bec2bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45hg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ftpb4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.763708 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19365c54-8e61-4422-b662-f9c31e5c1f55\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://828d41712fd94eedcd50f38c38d43fd474f30a968c24771c2db5da65f26e2b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a97e21c2564ee50d9bd442f9fbdad60984877571c61c0ab9b0a49978c9a02be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.775547 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddc3e3c3-a34d-4095-9d82-4a51911f455c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://559e47716f6751afd73ab762aff2e0b80c0bf1d6a390a03122d9e9a7d55c06ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9b3b16166a06cb8d2dbfcb59f281d684c99fddf8dd2dcc530a0b297cfe79907\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc7b415ceb349ee0dda497067d9c98ce687d91cbe06240e8064168aaf47e84c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"sta
rtTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.786291 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.796358 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.808008 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.817143 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.831369 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.850470 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca
875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.852947 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.852996 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.853009 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.853026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.853038 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.864549 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.875474 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.883414 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.891320 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c37ba9af194c07141c7b760700120f8c94e9ab27e9c2f285da8d340b93d288e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.902371 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.955195 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.955241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.955252 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.955268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5120]: I1208 19:30:37.955281 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.057799 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.058227 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.058260 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.058292 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.058316 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.161465 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.162362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.162413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.162441 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.162461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.265486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.265547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.265568 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.265590 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.265623 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.367528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.367580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.367592 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.367610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.367622 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.469791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.469824 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.469835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.469847 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.469856 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.482262 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.482293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482376 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482463 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.482442101 +0000 UTC m=+107.154548800 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.482459 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.482563 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482386 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482601 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482611 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482577 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482649 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.482636967 +0000 UTC m=+107.154743616 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482725 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.482693719 +0000 UTC m=+107.154800368 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482758 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482779 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482793 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.482861 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.482843483 +0000 UTC m=+107.154950132 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.572455 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.572514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.572527 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.572546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.572559 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.583370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.583496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.583602 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.583559999 +0000 UTC m=+107.255666688 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.583618 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.583697 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:54.583682063 +0000 UTC m=+107.255788712 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.659582 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.659755 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.660850 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:38 crc kubenswrapper[5120]: E1208 19:30:38.660924 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.673792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.673825 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.673841 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.673854 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.673862 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.778112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.778159 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.778196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.778213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.778225 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.881640 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.881709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.881734 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.881764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.881914 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.984640 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.984686 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.984699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.984720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5120]: I1208 19:30:38.984733 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.040259 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerStarted","Data":"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.040331 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerStarted","Data":"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.048386 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"5fd950a898229d4929cf82257869e658cfe2ff479fa3dba03070223ee9894eef"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.051905 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35fbb2df-5282-4e19-b92d-5b7ffd03f707\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dz6kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hvzp8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.070625 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72f36857-3aeb-4132-986f-d12fc2df547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8v9xh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d7p4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.086228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.086264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.086282 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.086295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.086304 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.096877 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdcac60-52ed-4b5c-bab2-539a0764add4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://54a2ed5f699b45644940bd3ece3d5188808a7723683031d2706e540f11b2f1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4ebf2c439d49538da24983660e2863634e582668e8c75373e7b9e9d8243c9bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b275ecbeb134108527ad4251c1a0118a2beb3523c80e0c8f2078bfc07f70cf96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c6875e39d7f78461eee48106b7b999adb8d1bb9183344d52cc1ecbd65c0185e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffa6df04866e7813705a20cea3957751c9eb9a39199507e7c8db76bd1de7d5be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6aa063a9950bb2db52fc9c037fba0f7902c5ca875212bca70deb4d6a0dafc04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2cb30cb52ce0a40fe8a20605d486120f3c4182954252951746f6e7ff74e35e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3cff2ba18f0822fab31e64688a0fdd2084434eb99ccba82161ee652f378a0f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.111981 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57c7e94e-3b5e-467b-83ad-227b41850996\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"319504 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:13.320266 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2226956868/tls.crt::/tmp/serving-cert-2226956868/tls.key\\\\\\\"\\\\nI1208 19:30:13.647881 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.650378 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.650401 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.650429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.650436 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.655911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 19:30:13.655943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655948 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.655952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.655956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.655958 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.655961 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 19:30:13.655985 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 19:30:13.658085 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1208 19:30:13.658139 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF1208 19:30:13.658185 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.123283 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-t6dx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b722a01-9c2b-4e79-a301-c728aa5a90a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dr8ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-t6dx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.131798 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tl7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d275b9e-8290-4f3d-8234-69302878d7d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glcrf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tl7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.144425 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c37ba9af194c07141c7b760700120f8c94e9ab27e9c2f285da8d340b93d288e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjbh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-08T19:30:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5j87q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.155783 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.170790 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a6b8a-c993-412e-afc5-96a43076734d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b162d875d850eed697128c07ef7c9dbe3b37fd6f298d195dde9824d2ed473030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9289e22746303b6ee985f528ed9feb724473c1ca5aefb94e19dc7a776a4a9de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://450094b2e7f59c5862cfd72cad7632f54ee2d96ad2a22e99dce40e69c6ce2672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e523373654c7a80ed33eb10afb6ea34378916b14309cd95fe6da2160b9268e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.182397 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.188699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.188741 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.188751 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.188767 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.188780 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.196317 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.265383 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" podStartSLOduration=72.265366765 podStartE2EDuration="1m12.265366765s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:39.254204233 +0000 UTC m=+91.926310882" watchObservedRunningTime="2025-12-08 19:30:39.265366765 +0000 UTC m=+91.937473414" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.281109 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.281088592 podStartE2EDuration="17.281088592s" podCreationTimestamp="2025-12-08 19:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:39.280889985 +0000 UTC m=+91.952996634" watchObservedRunningTime="2025-12-08 19:30:39.281088592 +0000 UTC m=+91.953195251" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.281525 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.281517535 podStartE2EDuration="17.281517535s" podCreationTimestamp="2025-12-08 19:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:39.265793469 +0000 UTC m=+91.937900128" watchObservedRunningTime="2025-12-08 19:30:39.281517535 +0000 UTC m=+91.953624184" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.290912 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.290951 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.290960 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.290972 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.290983 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.392701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.392756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.392768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.392786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.392797 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.495729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.495779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.495792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.495810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.495826 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.597774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.597821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.597835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.597849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.597862 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.659952 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:39 crc kubenswrapper[5120]: E1208 19:30:39.660279 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.660323 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:39 crc kubenswrapper[5120]: E1208 19:30:39.660558 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.699702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.699754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.699764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.699781 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.699791 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.801574 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.801619 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.801631 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.801645 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.801656 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.903497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.903547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.903561 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.903579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.903591 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.961102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.961178 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.961190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.961204 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:39 crc kubenswrapper[5120]: I1208 19:30:39.961214 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.006522 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr"] Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.013051 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.016820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.019063 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.019242 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.020143 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.037256 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=18.037226783 podStartE2EDuration="18.037226783s" podCreationTimestamp="2025-12-08 19:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:40.03653399 +0000 UTC m=+92.708640659" watchObservedRunningTime="2025-12-08 19:30:40.037226783 +0000 UTC m=+92.709333492" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.059097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"e752ab686b5279ac5a209a8d4002d12b6f4594697d600ed088f6e48142276c3b"} Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.060391 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t6dx4" event={"ID":"0b722a01-9c2b-4e79-a301-c728aa5a90a1","Type":"ContainerStarted","Data":"8c51564e121d83416712137987b1424df67325c55496b115ae927d50368d542a"} Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.062392 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"67da55137e77dd9794b5a5292fc2ba41fad2af97d8ec5e2909efd4b9434be340"} Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.098803 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.098889 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae270c20-5003-413c-913d-861c9343be80-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.099018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" 
(UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.099138 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae270c20-5003-413c-913d-861c9343be80-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.099226 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae270c20-5003-413c-913d-861c9343be80-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.136047 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.136027377 podStartE2EDuration="19.136027377s" podCreationTimestamp="2025-12-08 19:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:40.135068467 +0000 UTC m=+92.807175146" watchObservedRunningTime="2025-12-08 19:30:40.136027377 +0000 UTC m=+92.808134046" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200355 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae270c20-5003-413c-913d-861c9343be80-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200397 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200417 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200461 5120 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae270c20-5003-413c-913d-861c9343be80-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200484 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae270c20-5003-413c-913d-861c9343be80-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.200502 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae270c20-5003-413c-913d-861c9343be80-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.201665 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae270c20-5003-413c-913d-861c9343be80-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.204265 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podStartSLOduration=73.204249368 podStartE2EDuration="1m13.204249368s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:40.188909054 +0000 UTC m=+92.861015703" watchObservedRunningTime="2025-12-08 19:30:40.204249368 +0000 UTC m=+92.876356037" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.211129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae270c20-5003-413c-913d-861c9343be80-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.221749 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae270c20-5003-413c-913d-861c9343be80-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-m6prr\" (UID: \"ae270c20-5003-413c-913d-861c9343be80\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.244825 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t6dx4" podStartSLOduration=73.244801066 podStartE2EDuration="1m13.244801066s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:40.231545759 +0000 UTC m=+92.903652408" watchObservedRunningTime="2025-12-08 19:30:40.244801066 +0000 UTC 
m=+92.916907725" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.328492 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.647264 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.655813 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.659448 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:40 crc kubenswrapper[5120]: I1208 19:30:40.659484 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:40 crc kubenswrapper[5120]: E1208 19:30:40.659561 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:40 crc kubenswrapper[5120]: E1208 19:30:40.659630 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:41 crc kubenswrapper[5120]: I1208 19:30:41.066407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" event={"ID":"ae270c20-5003-413c-913d-861c9343be80","Type":"ContainerStarted","Data":"433e4c9b8648977c883df0677d2ea2e61fe321552ceb36f18374715074ce7e9a"} Dec 08 19:30:41 crc kubenswrapper[5120]: I1208 19:30:41.066471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" event={"ID":"ae270c20-5003-413c-913d-861c9343be80","Type":"ContainerStarted","Data":"fca32d620bde03e01edf0475e69de3aedf0aa8235262bf186c57759a42663bde"} Dec 08 19:30:41 crc kubenswrapper[5120]: I1208 19:30:41.083763 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-m6prr" podStartSLOduration=74.083738828 podStartE2EDuration="1m14.083738828s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:41.082935743 +0000 UTC m=+93.755042392" watchObservedRunningTime="2025-12-08 19:30:41.083738828 +0000 UTC m=+93.755845477" Dec 08 19:30:41 crc kubenswrapper[5120]: I1208 19:30:41.659011 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:41 crc kubenswrapper[5120]: E1208 19:30:41.659132 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:41 crc kubenswrapper[5120]: I1208 19:30:41.659011 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:41 crc kubenswrapper[5120]: E1208 19:30:41.659570 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:42 crc kubenswrapper[5120]: I1208 19:30:42.659210 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:42 crc kubenswrapper[5120]: I1208 19:30:42.659299 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:42 crc kubenswrapper[5120]: E1208 19:30:42.659470 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:42 crc kubenswrapper[5120]: E1208 19:30:42.659807 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:43 crc kubenswrapper[5120]: I1208 19:30:43.658977 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:43 crc kubenswrapper[5120]: E1208 19:30:43.659122 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:43 crc kubenswrapper[5120]: I1208 19:30:43.658981 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:43 crc kubenswrapper[5120]: E1208 19:30:43.659436 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:44 crc kubenswrapper[5120]: I1208 19:30:44.658948 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:44 crc kubenswrapper[5120]: I1208 19:30:44.659009 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:44 crc kubenswrapper[5120]: E1208 19:30:44.659102 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:44 crc kubenswrapper[5120]: E1208 19:30:44.659249 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:45 crc kubenswrapper[5120]: I1208 19:30:45.659520 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:45 crc kubenswrapper[5120]: E1208 19:30:45.659682 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:45 crc kubenswrapper[5120]: I1208 19:30:45.659717 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:45 crc kubenswrapper[5120]: E1208 19:30:45.659863 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:46 crc kubenswrapper[5120]: I1208 19:30:46.082582 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"bfde00eb4e4fc404eccbaae8d1c26ac0555c85950181aeba3e83f5a1805c0ab0"} Dec 08 19:30:46 crc kubenswrapper[5120]: I1208 19:30:46.659139 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:46 crc kubenswrapper[5120]: I1208 19:30:46.659416 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:46 crc kubenswrapper[5120]: E1208 19:30:46.659659 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:46 crc kubenswrapper[5120]: E1208 19:30:46.659828 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.088403 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="bfde00eb4e4fc404eccbaae8d1c26ac0555c85950181aeba3e83f5a1805c0ab0" exitCode=0 Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.088467 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"bfde00eb4e4fc404eccbaae8d1c26ac0555c85950181aeba3e83f5a1805c0ab0"} Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.662037 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:47 crc kubenswrapper[5120]: E1208 19:30:47.662133 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.662491 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:47 crc kubenswrapper[5120]: E1208 19:30:47.662572 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.664453 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:30:47 crc kubenswrapper[5120]: E1208 19:30:47.664671 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:47 crc kubenswrapper[5120]: I1208 19:30:47.941614 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"6cd57446164e31349496d7ca088a6f54f742042ed3efdf8d3ab90b04b1ef4f1d"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"c13470e1e2767a720f35b0fa139812edf32c47ac8f2c0ae7f6dfbee33556aa0c"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"106442a6e89d54037ddfad7a60188dbd555a3cf6df85d202834e370cd9267e06"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095252 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"e3d8e6f6742a4bf08885c876aa877b67c8a3face2be5e3a3f11180d38a19e362"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095266 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"8542f930df7975a087d84e4ac2d642e89142a6d27ad5d9da6f56b0955409f950"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.095281 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"11bd7c83e6371085562b076dd55467fec51fb89306fda486a06b01897670c376"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.096531 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tl7xr" 
event={"ID":"2d275b9e-8290-4f3d-8234-69302878d7d2","Type":"ContainerStarted","Data":"0b38ecf853e83f1c1cd8efcc2c40adb5db08d49f72a3bddca936ae4ae795280f"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.097702 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-88294" event={"ID":"9908c1e4-2e64-4aec-99cc-1ff468d1a145","Type":"ContainerStarted","Data":"01bc2b3540b1f612752bee01bb88f80ef64129c228dc8bddffcaf39ae4b54648"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.098906 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="f4081760c0fb9d6ae8fa4a5667dfa0a76594d36698cdaaa6bc57f3b869041e97" exitCode=0 Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.098971 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"f4081760c0fb9d6ae8fa4a5667dfa0a76594d36698cdaaa6bc57f3b869041e97"} Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.112192 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tl7xr" podStartSLOduration=81.112156881 podStartE2EDuration="1m21.112156881s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:48.11146258 +0000 UTC m=+100.783569239" watchObservedRunningTime="2025-12-08 19:30:48.112156881 +0000 UTC m=+100.784263530" Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.659619 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:48 crc kubenswrapper[5120]: E1208 19:30:48.659851 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:48 crc kubenswrapper[5120]: I1208 19:30:48.660058 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:48 crc kubenswrapper[5120]: E1208 19:30:48.660556 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:49 crc kubenswrapper[5120]: I1208 19:30:49.105423 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="98287da88851b599a758fc3c70381d89e5dccdff79c6c3f1c9c346db631b8e84" exitCode=0 Dec 08 19:30:49 crc kubenswrapper[5120]: I1208 19:30:49.105506 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"98287da88851b599a758fc3c70381d89e5dccdff79c6c3f1c9c346db631b8e84"} Dec 08 19:30:49 crc kubenswrapper[5120]: I1208 19:30:49.134534 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-88294" podStartSLOduration=82.134504525 podStartE2EDuration="1m22.134504525s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:48.139272276 +0000 UTC m=+100.811378925" watchObservedRunningTime="2025-12-08 19:30:49.134504525 +0000 UTC m=+101.806611214" Dec 08 19:30:49 crc kubenswrapper[5120]: I1208 19:30:49.659611 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:49 crc kubenswrapper[5120]: I1208 19:30:49.659733 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:49 crc kubenswrapper[5120]: E1208 19:30:49.659897 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:49 crc kubenswrapper[5120]: E1208 19:30:49.660064 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:50 crc kubenswrapper[5120]: I1208 19:30:50.111313 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="20377bb3d95ad7ef8c0ae6d86b6c20370be22d6bcd42fbc0b49e098220642979" exitCode=0 Dec 08 19:30:50 crc kubenswrapper[5120]: I1208 19:30:50.111392 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"20377bb3d95ad7ef8c0ae6d86b6c20370be22d6bcd42fbc0b49e098220642979"} Dec 08 19:30:50 crc kubenswrapper[5120]: I1208 19:30:50.658894 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:50 crc kubenswrapper[5120]: I1208 19:30:50.658966 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:50 crc kubenswrapper[5120]: E1208 19:30:50.659107 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:50 crc kubenswrapper[5120]: E1208 19:30:50.659250 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:51 crc kubenswrapper[5120]: I1208 19:30:51.136751 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"5512c3b26249f7e15cc03298be9df7d8f611271bb4359ace6d7a5ed93d92dc1f"} Dec 08 19:30:51 crc kubenswrapper[5120]: I1208 19:30:51.142307 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="ed9089a562cd2884fcdb1470cd5921fe756d88c6032ad1e72ddb05d5603c9132" exitCode=0 Dec 08 19:30:51 crc kubenswrapper[5120]: I1208 19:30:51.142418 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"ed9089a562cd2884fcdb1470cd5921fe756d88c6032ad1e72ddb05d5603c9132"} Dec 08 19:30:51 crc kubenswrapper[5120]: I1208 19:30:51.658994 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:51 crc kubenswrapper[5120]: I1208 19:30:51.659006 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:51 crc kubenswrapper[5120]: E1208 19:30:51.659214 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:51 crc kubenswrapper[5120]: E1208 19:30:51.659409 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:52 crc kubenswrapper[5120]: I1208 19:30:52.151116 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="b8ef138e2c4292b53f4331d3059bae392d1802b27f50f7dd3ee7cddcf3d555f7" exitCode=0 Dec 08 19:30:52 crc kubenswrapper[5120]: I1208 19:30:52.151223 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"b8ef138e2c4292b53f4331d3059bae392d1802b27f50f7dd3ee7cddcf3d555f7"} Dec 08 19:30:52 crc kubenswrapper[5120]: I1208 19:30:52.659014 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:52 crc kubenswrapper[5120]: I1208 19:30:52.659038 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:52 crc kubenswrapper[5120]: E1208 19:30:52.659311 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:52 crc kubenswrapper[5120]: E1208 19:30:52.659387 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:53 crc kubenswrapper[5120]: I1208 19:30:53.157486 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"fc76f0346ebcbc6f9e92dc3d4b2a320b56c2ee193f33205b1096c2ecc0e232dc"} Dec 08 19:30:53 crc kubenswrapper[5120]: I1208 19:30:53.163472 5120 generic.go:358] "Generic (PLEG): container finished" podID="72f36857-3aeb-4132-986f-d12fc2df547c" containerID="d1ca046112454c2072ae5ea8b83f312d5e7db58d387d2d4765f2616cd879adc9" exitCode=0 Dec 08 19:30:53 crc kubenswrapper[5120]: I1208 19:30:53.163565 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerDied","Data":"d1ca046112454c2072ae5ea8b83f312d5e7db58d387d2d4765f2616cd879adc9"} Dec 08 19:30:53 crc kubenswrapper[5120]: I1208 19:30:53.659441 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:53 crc kubenswrapper[5120]: I1208 19:30:53.659494 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:53 crc kubenswrapper[5120]: E1208 19:30:53.659664 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:53 crc kubenswrapper[5120]: E1208 19:30:53.659846 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.172611 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerStarted","Data":"c14dd9da6dcf0894766923d72a8566c254a78dd9a49c4d238965665edf6bef2d"} Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.173048 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.178395 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" event={"ID":"72f36857-3aeb-4132-986f-d12fc2df547c","Type":"ContainerStarted","Data":"b8bf62f8781a8bd1d1862f831e0abf5c30ec0a0d1cd4376eabf11015c1272779"} Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.207435 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podStartSLOduration=87.207418161 podStartE2EDuration="1m27.207418161s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:54.205191151 +0000 UTC m=+106.877297850" watchObservedRunningTime="2025-12-08 19:30:54.207418161 +0000 UTC m=+106.879524820" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.209511 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.251418 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d7p4j" podStartSLOduration=87.251400948 podStartE2EDuration="1m27.251400948s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:54.249491268 +0000 UTC m=+106.921597937" watchObservedRunningTime="2025-12-08 19:30:54.251400948 +0000 UTC m=+106.923507607" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.566215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.566277 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.566323 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566397 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566452 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566506 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566547 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566567 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566516 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.566495093 +0000 UTC m=+139.238601752 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.566712 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.566657768 +0000 UTC m=+139.238764467 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.566768 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.567020 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.566994329 +0000 UTC m=+139.239101018 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.567252 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.567283 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.567307 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.567398 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.56737673 +0000 UTC m=+139.239483419 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.659162 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.659375 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.659162 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.659816 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.667528 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:54 crc kubenswrapper[5120]: I1208 19:30:54.667776 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.667816 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.667771766 +0000 UTC m=+139.339878445 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.667900 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:54 crc kubenswrapper[5120]: E1208 19:30:54.667976 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs podName:35fbb2df-5282-4e19-b92d-5b7ffd03f707 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:26.667953022 +0000 UTC m=+139.340059711 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs") pod "network-metrics-daemon-hvzp8" (UID: "35fbb2df-5282-4e19-b92d-5b7ffd03f707") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.180992 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.181035 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.202235 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.575832 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hvzp8"] Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.575941 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:55 crc kubenswrapper[5120]: E1208 19:30:55.576062 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.662516 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:55 crc kubenswrapper[5120]: I1208 19:30:55.662541 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:55 crc kubenswrapper[5120]: E1208 19:30:55.663020 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:55 crc kubenswrapper[5120]: E1208 19:30:55.663127 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:56 crc kubenswrapper[5120]: I1208 19:30:56.659060 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:56 crc kubenswrapper[5120]: E1208 19:30:56.659202 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:57 crc kubenswrapper[5120]: I1208 19:30:57.661445 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:57 crc kubenswrapper[5120]: E1208 19:30:57.661560 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:57 crc kubenswrapper[5120]: I1208 19:30:57.661639 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:57 crc kubenswrapper[5120]: I1208 19:30:57.661828 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:57 crc kubenswrapper[5120]: E1208 19:30:57.661948 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:57 crc kubenswrapper[5120]: E1208 19:30:57.662223 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:58 crc kubenswrapper[5120]: I1208 19:30:58.659324 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:58 crc kubenswrapper[5120]: E1208 19:30:58.659469 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:59 crc kubenswrapper[5120]: I1208 19:30:59.659292 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:59 crc kubenswrapper[5120]: I1208 19:30:59.659409 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:30:59 crc kubenswrapper[5120]: I1208 19:30:59.659411 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:59 crc kubenswrapper[5120]: E1208 19:30:59.660099 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:59 crc kubenswrapper[5120]: E1208 19:30:59.660357 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hvzp8" podUID="35fbb2df-5282-4e19-b92d-5b7ffd03f707" Dec 08 19:30:59 crc kubenswrapper[5120]: E1208 19:30:59.660541 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:59 crc kubenswrapper[5120]: I1208 19:30:59.661238 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.201280 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.202902 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810"} Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.203340 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.225890 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=38.22587351 podStartE2EDuration="38.22587351s" podCreationTimestamp="2025-12-08 19:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:00.225357344 +0000 UTC m=+112.897464013" watchObservedRunningTime="2025-12-08 19:31:00.22587351 +0000 UTC m=+112.897980169" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.348692 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.348993 5120 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.384345 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rlnz2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.387592 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.389830 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.391004 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.391365 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.393343 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.393658 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.393768 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.395207 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.404281 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.405936 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5gcj2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.408963 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.416627 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.416979 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.416746 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.417102 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.417334 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.417387 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.418099 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.418244 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-l8lnb"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.418361 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.418584 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.419370 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.420952 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.423319 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.424069 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.426481 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.426615 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.426747 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440156 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440232 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-serving-cert\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440330 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440349 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440367 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-client\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440405 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440427 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28gg\" (UniqueName: \"kubernetes.io/projected/184dedef-1313-4bdf-ba4b-d4e022e05b81-kube-api-access-t28gg\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440448 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440466 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440485 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440517 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440544 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-image-import-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440565 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-encryption-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440585 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hb2w\" (UniqueName: \"kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440627 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440661 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.440679 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit-dir\") pod 
\"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.441342 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.448304 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.453558 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.456342 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-qrz2m"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.457091 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.458525 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.458650 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.458729 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.459176 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.459337 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.461418 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.462076 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.464232 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-xdqgf"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.464837 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.464976 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.465035 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.465241 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.465302 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.465683 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.466041 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.467367 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.468230 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.470423 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.470649 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.470951 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.471202 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.475596 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.478503 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479036 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479287 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479505 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479705 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479832 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.479581 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.480125 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.481013 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.481110 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.481772 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.480008 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.482884 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.483066 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.483584 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.484735 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.484906 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.485311 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.486288 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.487029 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.487249 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.487451 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.487664 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.486419 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.490414 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.500700 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.504104 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.508513 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.508523 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.518139 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.518491 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.530307 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.533973 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.536136 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.536423 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-x44k9"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.536581 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.547595 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.549108 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.551897 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.551970 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d19622f-cb5b-4ce7-85be-594da032f286-machine-approver-tls\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552010 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552061 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-image-import-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552089 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-encryption-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552114 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hb2w\" (UniqueName: \"kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552190 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552219 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552247 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552275 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-auth-proxy-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552304 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552337 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzh5\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-kube-api-access-7gzh5\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552368 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdnz\" (UniqueName: \"kubernetes.io/projected/1d19622f-cb5b-4ce7-85be-594da032f286-kube-api-access-trdnz\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552410 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552437 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552470 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit-dir\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552495 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552569 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-serving-cert\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552662 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-client\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552688 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552724 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t28gg\" (UniqueName: \"kubernetes.io/projected/184dedef-1313-4bdf-ba4b-d4e022e05b81-kube-api-access-t28gg\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552761 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.552789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.553759 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.554521 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.554623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit-dir\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.554642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.554727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184dedef-1313-4bdf-ba4b-d4e022e05b81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.555929 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: 
\"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.556536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.557405 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.557872 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.558247 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.558531 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.558786 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.558917 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-image-import-ca\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.562884 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.564021 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.563377 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.563583 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.570757 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571056 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571255 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571567 5120 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571786 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.571931 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.572152 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.572921 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.582034 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-audit\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.584009 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-serving-cert\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.589760 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.589996 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.590241 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.592571 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.593394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.593555 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.593801 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594244 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594372 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594506 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594543 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594688 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594777 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.594931 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.595155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-encryption-config\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.595255 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.595369 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.595452 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.596114 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.596291 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.602104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.602199 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.603484 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wt7pp"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.604439 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.606113 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/184dedef-1313-4bdf-ba4b-d4e022e05b81-etcd-client\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.606130 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rlnz2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.606152 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.606260 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.606570 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.608480 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.610469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.612455 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.618363 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.619492 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.619582 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.629207 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.629836 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.630326 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.630925 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.631075 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.631321 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-xdqgf"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.631415 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.631649 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.631802 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.632594 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.633406 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.634383 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.634541 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.634983 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-l8lnb"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.637116 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.644485 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.645697 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.650376 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.650936 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.656236 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.657820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.658598 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659588 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659630 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-auth-proxy-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659667 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzh5\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-kube-api-access-7gzh5\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659695 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trdnz\" (UniqueName: \"kubernetes.io/projected/1d19622f-cb5b-4ce7-85be-594da032f286-kube-api-access-trdnz\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659739 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659769 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659869 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/1d19622f-cb5b-4ce7-85be-594da032f286-machine-approver-tls\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659933 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659956 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.659979 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.660965 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.661581 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d19622f-cb5b-4ce7-85be-594da032f286-auth-proxy-config\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.661997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-tmp\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.662308 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.663817 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/184dedef-1313-4bdf-ba4b-d4e022e05b81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: 
\"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.663871 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.663897 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.663949 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.664066 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.664086 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.664378 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.668342 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d19622f-cb5b-4ce7-85be-594da032f286-machine-approver-tls\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.670078 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.675230 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.675273 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.675389 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.678267 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-tfvd8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.678474 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.680837 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.680857 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.680871 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.680985 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.683723 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.683889 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.688506 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.688569 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.688657 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.689787 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.691309 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.691461 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.693600 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.693737 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.696205 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.696342 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.699122 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-bfw55"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.699241 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.704874 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.709638 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.711265 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.711381 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.711488 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.716340 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r96t4"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.716512 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720310 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qrz2m"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720347 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720359 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720373 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720385 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720397 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x44k9"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720407 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720462 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720477 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-tfvd8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.720487 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.727943 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728529 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wt7pp"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728577 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728593 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728605 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728616 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728627 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.728643 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sjvd8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.730507 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.732271 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bfzmj"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.732600 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.735597 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q9j44"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.735763 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.738510 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-d42vt"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.738664 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742130 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5gcj2"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742244 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742287 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742305 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742318 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742328 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r96t4"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742340 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742353 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.742365 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sjvd8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.743383 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjxf5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.749817 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.749846 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q9j44"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.749858 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.749933 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.749947 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.750614 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjxf5"] Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.770116 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.789980 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.810929 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.829644 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.850512 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.869577 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.891012 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.910126 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.931380 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.950449 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.970377 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:31:00 crc kubenswrapper[5120]: I1208 19:31:00.990717 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.009919 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.048210 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28gg\" (UniqueName: \"kubernetes.io/projected/184dedef-1313-4bdf-ba4b-d4e022e05b81-kube-api-access-t28gg\") pod \"apiserver-9ddfb9f55-rlnz2\" (UID: \"184dedef-1313-4bdf-ba4b-d4e022e05b81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 
19:31:01.054235 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.073749 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.091017 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.130534 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hb2w\" (UniqueName: \"kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w\") pod \"controller-manager-65b6cccf98-bmg84\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.150565 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.170340 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.189964 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.210872 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.231075 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.251060 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.272636 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.310057 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.311645 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.327840 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.330893 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.365757 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367756 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367792 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367853 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d6f36-c28b-4eea-b32e-37557479492e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367908 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-images\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367931 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swdk\" 
(UniqueName: \"kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367951 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367975 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsbn\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.367998 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-config\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-serving-cert\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368037 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368081 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-encryption-config\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368107 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-policies\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368126 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-dir\") pod 
\"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368193 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdc330d9-fcf6-4bbb-91bb-decbb280945b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368214 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b303839-3284-4b71-b006-360460533813-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368234 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368256 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368293 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368314 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvxh\" (UniqueName: \"kubernetes.io/projected/3d98f7b7-d51d-44f4-a84c-edfae10c5964-kube-api-access-wvvxh\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.368430 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369075 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369562 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1d6f36-c28b-4eea-b32e-37557479492e-serving-cert\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369721 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369752 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-trusted-ca\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369874 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.369982 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53280245-e8bc-4afc-805c-95ed67a48227-metrics-tls\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370035 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb8bx\" (UniqueName: \"kubernetes.io/projected/53280245-e8bc-4afc-805c-95ed67a48227-kube-api-access-cb8bx\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370069 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-client\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370342 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxfp\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-kube-api-access-bzxfp\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370387 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370410 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a1d6f36-c28b-4eea-b32e-37557479492e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370451 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-service-ca\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370472 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-config\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-oauth-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz28\" (UniqueName: \"kubernetes.io/projected/fdc330d9-fcf6-4bbb-91bb-decbb280945b-kube-api-access-mbz28\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370560 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-config\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370594 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-trusted-ca-bundle\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370647 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370670 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-config\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.370690 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977ffd05-e876-4f78-95f6-80c1e31b71c3-serving-cert\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.371018 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.871002266 +0000 UTC m=+114.543108915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371341 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55nxq\" (UniqueName: \"kubernetes.io/projected/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-kube-api-access-55nxq\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371437 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86bk\" (UniqueName: \"kubernetes.io/projected/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-kube-api-access-k86bk\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371465 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqfs\" (UniqueName: \"kubernetes.io/projected/977ffd05-e876-4f78-95f6-80c1e31b71c3-kube-api-access-9tqfs\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371500 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98f7b7-d51d-44f4-a84c-edfae10c5964-serving-cert\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371526 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/53280245-e8bc-4afc-805c-95ed67a48227-tmp-dir\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371899 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" 
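[Aside on the error above: MountVolume.MountDevice for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and the volume manager backs off for the durationBeforeRetry of 500ms before retrying. This is consistent with the same sync loop still creating the plugin pod hostpath-provisioner/csi-hostpathplugin-tjxf5 ("No sandbox for pod can be found. Need to start a new one"), so the hostpath driver presumably has not yet registered with this kubelet. A minimal sketch, not part of the log and resting on stated assumptions (a hypothetical kubeconfig path, client-go available, and a Node object named "crc" as the journal hostname suggests), of how one might confirm driver registration once the plugin pod is running:]

// csidriver_check.go - hedged sketch, not from the log or the kubelet source.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig for this CRC cluster lives at this hypothetical path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/core/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Cluster-scoped CSIDriver objects show which drivers are installed cluster-wide.
	drivers, err := client.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println("CSIDriver installed:", d.Name)
	}

	// The CSINode object for the node (assumed to be named "crc") lists the plugins that
	// have actually registered with this kubelet; the mount error above indicates
	// kubevirt.io.hostpath-provisioner was not yet in that set at 19:31:01.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered on node:", d.Name)
	}
}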
Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371946 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.371988 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1d6f36-c28b-4eea-b32e-37557479492e-config\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372091 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372188 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372223 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b303839-3284-4b71-b006-360460533813-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372422 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xr47\" (UniqueName: \"kubernetes.io/projected/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-kube-api-access-2xr47\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372458 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/97b101ad-fe48-408d-8965-af78f6b66e12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372598 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-oauth-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372629 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372664 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.372973 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373007 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373129 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373157 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373194 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l59h\" (UniqueName: \"kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373618 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373705 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.373736 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfv2f\" (UniqueName: \"kubernetes.io/projected/97b101ad-fe48-408d-8965-af78f6b66e12-kube-api-access-wfv2f\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.384723 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzh5\" (UniqueName: \"kubernetes.io/projected/f9a01a39-6668-4d4b-92d6-1435fd9c1cf9-kube-api-access-7gzh5\") pod \"cluster-image-registry-operator-86c45576b9-r5zn7\" (UID: \"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.411391 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.416694 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdnz\" (UniqueName: \"kubernetes.io/projected/1d19622f-cb5b-4ce7-85be-594da032f286-kube-api-access-trdnz\") pod \"machine-approver-54c688565-p4lk6\" (UID: \"1d19622f-cb5b-4ce7-85be-594da032f286\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.433409 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.436924 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.451498 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.470420 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.474786 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.475101 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.975070217 +0000 UTC m=+114.647176866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-dir\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-certs\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475230 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d221c560-78a6-47ae-a3da-e0a6bd649e8b-available-featuregates\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475246 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzsm\" (UniqueName: \"kubernetes.io/projected/157ad135-aa3e-4834-adc9-c5c417319d33-kube-api-access-bqzsm\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475266 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d70b549b-1501-43b7-9b26-501ab5e58cf5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475285 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdc330d9-fcf6-4bbb-91bb-decbb280945b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475315 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475351 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvxh\" (UniqueName: \"kubernetes.io/projected/3d98f7b7-d51d-44f4-a84c-edfae10c5964-kube-api-access-wvvxh\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475367 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475384 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475401 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-config\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475417 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz74b\" (UniqueName: \"kubernetes.io/projected/5307c139-e352-4fcf-97e2-07e71a2e40ed-kube-api-access-wz74b\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475434 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-trusted-ca\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475449 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvprs\" (UniqueName: \"kubernetes.io/projected/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-kube-api-access-wvprs\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475466 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-mountpoint-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475480 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-default-certificate\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475493 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c495775-98b3-40e4-a984-75b3a9b6209c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475525 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzxfp\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-kube-api-access-bzxfp\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475542 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475558 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475574 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-service-ca\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-config\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-oauth-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz28\" (UniqueName: \"kubernetes.io/projected/fdc330d9-fcf6-4bbb-91bb-decbb280945b-kube-api-access-mbz28\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475654 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-config\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475672 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-trusted-ca-bundle\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " 
pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475688 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475703 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-config\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475719 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-cabundle\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475736 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjml\" (UniqueName: \"kubernetes.io/projected/8e207158-b72d-4f6a-9e10-6647b59b0cf1-kube-api-access-trjml\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: \"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475753 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be87d8a1-7626-450d-98af-e9b2bdbf91a1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475772 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55nxq\" (UniqueName: \"kubernetes.io/projected/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-kube-api-access-55nxq\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475787 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475805 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/25a0f226-35a2-4d2a-bae8-caa664a3f12f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: 
\"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475833 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475849 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-stats-auth\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.475866 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k86bk\" (UniqueName: \"kubernetes.io/projected/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-kube-api-access-k86bk\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476078 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-socket-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476095 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mmh\" (UniqueName: \"kubernetes.io/projected/bdfdfbe7-994c-4e98-ac93-9627b4264429-kube-api-access-l6mmh\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476130 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17257e16-3a91-4dec-b45e-bc409c0c9a09-tmp-dir\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476146 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476179 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/53280245-e8bc-4afc-805c-95ed67a48227-tmp-dir\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: 
I1208 19:31:01.476196 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-webhook-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54r2\" (UniqueName: \"kubernetes.io/projected/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-kube-api-access-b54r2\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476225 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdj2\" (UniqueName: \"kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476272 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17257e16-3a91-4dec-b45e-bc409c0c9a09-metrics-tls\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476291 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/157ad135-aa3e-4834-adc9-c5c417319d33-tmp-dir\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476310 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476326 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-node-bootstrap-token\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476341 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9gmbg\" (UniqueName: \"kubernetes.io/projected/92f9c73f-321e-4610-9cc8-cf819293369d-kube-api-access-9gmbg\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476356 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-srv-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476390 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d221c560-78a6-47ae-a3da-e0a6bd649e8b-serving-cert\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476409 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476430 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-oauth-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476446 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476483 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzk44\" (UniqueName: \"kubernetes.io/projected/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-kube-api-access-bzk44\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476499 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready\") 
pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476538 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476555 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4l59h\" (UniqueName: \"kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476575 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wfv2f\" (UniqueName: \"kubernetes.io/projected/97b101ad-fe48-408d-8965-af78f6b66e12-kube-api-access-wfv2f\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-config\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e207158-b72d-4f6a-9e10-6647b59b0cf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: 
\"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476668 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-tmpfs\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476683 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmqv\" (UniqueName: \"kubernetes.io/projected/c7cd1071-0374-4bcd-b58b-2614dba70805-kube-api-access-hlmqv\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476699 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5307c139-e352-4fcf-97e2-07e71a2e40ed-webhook-certs\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476717 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476731 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70b549b-1501-43b7-9b26-501ab5e58cf5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476747 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-metrics-certs\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476765 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d6f36-c28b-4eea-b32e-37557479492e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476781 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476804 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5swdk\" (UniqueName: \"kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476823 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-images\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476858 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wzsbn\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476874 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-serving-cert\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476890 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476905 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-service-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476925 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-plugins-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476946 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-policies\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476962 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476979 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjv5\" (UniqueName: \"kubernetes.io/projected/17257e16-3a91-4dec-b45e-bc409c0c9a09-kube-api-access-6zjv5\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.476994 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92f9c73f-321e-4610-9cc8-cf819293369d-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477031 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b303839-3284-4b71-b006-360460533813-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d70b549b-1501-43b7-9b26-501ab5e58cf5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477065 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be87d8a1-7626-450d-98af-e9b2bdbf91a1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477103 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1d6f36-c28b-4eea-b32e-37557479492e-serving-cert\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477143 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477182 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a0f226-35a2-4d2a-bae8-caa664a3f12f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477205 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-client\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477222 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a1d6f36-c28b-4eea-b32e-37557479492e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477634 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6688fee1-c5c8-4299-a2d3-5933a57d2099-tmpfs\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477682 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53280245-e8bc-4afc-805c-95ed67a48227-metrics-tls\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477710 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cb8bx\" (UniqueName: \"kubernetes.io/projected/53280245-e8bc-4afc-805c-95ed67a48227-kube-api-access-cb8bx\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477736 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-auth-proxy-config\") pod 
\"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477791 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a0f226-35a2-4d2a-bae8-caa664a3f12f-config\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477848 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477866 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477886 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-cert\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.477924 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977ffd05-e876-4f78-95f6-80c1e31b71c3-serving-cert\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478203 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478300 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974m8\" (UniqueName: \"kubernetes.io/projected/1a0ac583-3330-481c-80b9-a58ab58c4786-kube-api-access-974m8\") pod \"migrator-866fcbc849-9zgcs\" (UID: \"1a0ac583-3330-481c-80b9-a58ab58c4786\") " 
pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74btt\" (UniqueName: \"kubernetes.io/projected/2d163a97-86f5-4aa7-8013-c8f6a860724c-kube-api-access-74btt\") pod \"downloads-747b44746d-x44k9\" (UID: \"2d163a97-86f5-4aa7-8013-c8f6a860724c\") " pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478355 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-config\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478379 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqfs\" (UniqueName: \"kubernetes.io/projected/977ffd05-e876-4f78-95f6-80c1e31b71c3-kube-api-access-9tqfs\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478400 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98f7b7-d51d-44f4-a84c-edfae10c5964-serving-cert\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478420 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7cd1071-0374-4bcd-b58b-2614dba70805-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478422 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478440 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478463 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjbz\" (UniqueName: \"kubernetes.io/projected/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-kube-api-access-lpjbz\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92f9c73f-321e-4610-9cc8-cf819293369d-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478499 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjwb\" (UniqueName: \"kubernetes.io/projected/7e253e29-6c77-46b2-94a2-b75a825444f0-kube-api-access-lhjwb\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478541 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-dir\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478757 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4a1d6f36-c28b-4eea-b32e-37557479492e-tmp-dir\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.478922 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-audit-policies\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.479519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-trusted-ca\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.479553 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.479580 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.479885 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.480380 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-service-ca\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.480649 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b303839-3284-4b71-b006-360460533813-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.480926 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-oauth-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.482183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977ffd05-e876-4f78-95f6-80c1e31b71c3-config\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.482555 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-config\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.482754 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.483047 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.483515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/53280245-e8bc-4afc-805c-95ed67a48227-tmp-dir\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 
crc kubenswrapper[5120]: I1208 19:31:01.483636 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d98f7b7-d51d-44f4-a84c-edfae10c5964-config\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.483911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.483958 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-serving-cert\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.483980 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfsc8\" (UniqueName: \"kubernetes.io/projected/d221c560-78a6-47ae-a3da-e0a6bd649e8b-kube-api-access-wfsc8\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484005 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1d6f36-c28b-4eea-b32e-37557479492e-config\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484025 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jwz\" (UniqueName: \"kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484047 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-key\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484098 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrjc\" (UniqueName: \"kubernetes.io/projected/fc709ff6-80b7-4208-a534-3311e895e710-kube-api-access-dgrjc\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484154 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.484358 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.9843395 +0000 UTC m=+114.656446229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484455 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-trusted-ca-bundle\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484852 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-trusted-ca-bundle\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.484953 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-client\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485004 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d70b549b-1501-43b7-9b26-501ab5e58cf5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485054 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485071 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-serving-ca\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b303839-3284-4b71-b006-360460533813-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485138 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc709ff6-80b7-4208-a534-3311e895e710-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xr47\" (UniqueName: \"kubernetes.io/projected/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-kube-api-access-2xr47\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485312 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/97b101ad-fe48-408d-8965-af78f6b66e12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485446 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485487 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwphz\" (UniqueName: \"kubernetes.io/projected/be87d8a1-7626-450d-98af-e9b2bdbf91a1-kube-api-access-nwphz\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " 
pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485591 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6tp\" (UniqueName: \"kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485623 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-registration-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485665 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485725 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-serving-cert\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.485854 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1d6f36-c28b-4eea-b32e-37557479492e-config\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486325 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1d6f36-c28b-4eea-b32e-37557479492e-serving-cert\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486525 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcp6f\" (UniqueName: \"kubernetes.io/projected/6688fee1-c5c8-4299-a2d3-5933a57d2099-kube-api-access-mcp6f\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486569 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e253e29-6c77-46b2-94a2-b75a825444f0-tmpfs\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486602 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486632 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486663 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486707 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486741 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a0f226-35a2-4d2a-bae8-caa664a3f12f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486915 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.486994 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487038 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-serving-cert\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487197 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-images\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487235 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-apiservice-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487280 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-config\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgllw\" (UniqueName: \"kubernetes.io/projected/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-kube-api-access-fgllw\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487334 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-csi-data-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487361 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvb8\" (UniqueName: \"kubernetes.io/projected/4c495775-98b3-40e4-a984-75b3a9b6209c-kube-api-access-7lvb8\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487430 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-srv-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487469 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-encryption-config\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487511 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17257e16-3a91-4dec-b45e-bc409c0c9a09-config-volume\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487539 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzg8\" (UniqueName: \"kubernetes.io/projected/d29bd20f-dc09-4486-84f2-4debbc6f931f-kube-api-access-sfzg8\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.487570 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.488501 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-images\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.488654 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.488867 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.489190 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.489834 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" 
Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.489972 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc330d9-fcf6-4bbb-91bb-decbb280945b-config\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.491076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.491258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.491717 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.493337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.493561 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-serving-cert\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.494200 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53280245-e8bc-4afc-805c-95ed67a48227-metrics-tls\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.494472 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-encryption-config\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.494827 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.494901 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.495266 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-etcd-client\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.495867 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b303839-3284-4b71-b006-360460533813-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496343 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496396 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496401 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d98f7b7-d51d-44f4-a84c-edfae10c5964-serving-cert\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/977ffd05-e876-4f78-95f6-80c1e31b71c3-serving-cert\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496759 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-console-oauth-config\") pod \"console-64d44f6ddf-qrz2m\" (UID: 
\"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496813 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.496968 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdc330d9-fcf6-4bbb-91bb-decbb280945b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.497538 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.498518 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.499061 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.502426 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/97b101ad-fe48-408d-8965-af78f6b66e12-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.511275 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.526644 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-rlnz2"] Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.530200 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.538656 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:31:01 crc kubenswrapper[5120]: W1208 19:31:01.545485 5120 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6c9e9e_2924_4940_baca_0d24615c9513.slice/crio-7fb37c35f38964808a528f44a97a10880bcedf41ce47b0b7bcde0330989f29ed WatchSource:0}: Error finding container 7fb37c35f38964808a528f44a97a10880bcedf41ce47b0b7bcde0330989f29ed: Status 404 returned error can't find the container with id 7fb37c35f38964808a528f44a97a10880bcedf41ce47b0b7bcde0330989f29ed Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.550944 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.570720 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588476 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588645 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588684 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-config\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.588741 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.088700339 +0000 UTC m=+114.760806998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-cabundle\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588845 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trjml\" (UniqueName: \"kubernetes.io/projected/8e207158-b72d-4f6a-9e10-6647b59b0cf1-kube-api-access-trjml\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: \"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be87d8a1-7626-450d-98af-e9b2bdbf91a1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.588910 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.589400 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-config\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590534 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590636 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/25a0f226-35a2-4d2a-bae8-caa664a3f12f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590700 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590729 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-stats-auth\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590756 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-socket-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mmh\" (UniqueName: \"kubernetes.io/projected/bdfdfbe7-994c-4e98-ac93-9627b4264429-kube-api-access-l6mmh\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590840 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17257e16-3a91-4dec-b45e-bc409c0c9a09-tmp-dir\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590865 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590900 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-webhook-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590925 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b54r2\" (UniqueName: \"kubernetes.io/projected/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-kube-api-access-b54r2\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590952 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdj2\" (UniqueName: \"kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.590987 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17257e16-3a91-4dec-b45e-bc409c0c9a09-metrics-tls\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591013 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/157ad135-aa3e-4834-adc9-c5c417319d33-tmp-dir\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591046 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-node-bootstrap-token\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591072 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gmbg\" (UniqueName: \"kubernetes.io/projected/92f9c73f-321e-4610-9cc8-cf819293369d-kube-api-access-9gmbg\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591099 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-srv-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591121 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/25a0f226-35a2-4d2a-bae8-caa664a3f12f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591197 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d221c560-78a6-47ae-a3da-e0a6bd649e8b-serving-cert\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591240 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591286 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzk44\" (UniqueName: \"kubernetes.io/projected/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-kube-api-access-bzk44\") pod 
\"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591311 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591363 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-config\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591386 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e207158-b72d-4f6a-9e10-6647b59b0cf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: \"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591417 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-tmpfs\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591440 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hlmqv\" (UniqueName: \"kubernetes.io/projected/c7cd1071-0374-4bcd-b58b-2614dba70805-kube-api-access-hlmqv\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591463 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5307c139-e352-4fcf-97e2-07e71a2e40ed-webhook-certs\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591491 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70b549b-1501-43b7-9b26-501ab5e58cf5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-metrics-certs\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " 
pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591572 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-images\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591598 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-service-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591628 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-plugins-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjv5\" (UniqueName: \"kubernetes.io/projected/17257e16-3a91-4dec-b45e-bc409c0c9a09-kube-api-access-6zjv5\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591690 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92f9c73f-321e-4610-9cc8-cf819293369d-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591716 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d70b549b-1501-43b7-9b26-501ab5e58cf5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591741 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be87d8a1-7626-450d-98af-e9b2bdbf91a1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591797 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a0f226-35a2-4d2a-bae8-caa664a3f12f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591830 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6688fee1-c5c8-4299-a2d3-5933a57d2099-tmpfs\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591858 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591895 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591915 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a0f226-35a2-4d2a-bae8-caa664a3f12f-config\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591939 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591959 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-cert\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.591989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-974m8\" (UniqueName: \"kubernetes.io/projected/1a0ac583-3330-481c-80b9-a58ab58c4786-kube-api-access-974m8\") pod \"migrator-866fcbc849-9zgcs\" (UID: \"1a0ac583-3330-481c-80b9-a58ab58c4786\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592010 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74btt\" (UniqueName: \"kubernetes.io/projected/2d163a97-86f5-4aa7-8013-c8f6a860724c-kube-api-access-74btt\") pod \"downloads-747b44746d-x44k9\" 
(UID: \"2d163a97-86f5-4aa7-8013-c8f6a860724c\") " pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592040 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7cd1071-0374-4bcd-b58b-2614dba70805-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592067 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjbz\" (UniqueName: \"kubernetes.io/projected/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-kube-api-access-lpjbz\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92f9c73f-321e-4610-9cc8-cf819293369d-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjwb\" (UniqueName: \"kubernetes.io/projected/7e253e29-6c77-46b2-94a2-b75a825444f0-kube-api-access-lhjwb\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592149 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-serving-cert\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.592190 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wfsc8\" (UniqueName: \"kubernetes.io/projected/d221c560-78a6-47ae-a3da-e0a6bd649e8b-kube-api-access-wfsc8\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.594717 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-socket-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.595504 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/157ad135-aa3e-4834-adc9-c5c417319d33-tmp-dir\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc 
kubenswrapper[5120]: E1208 19:31:01.596019 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.095987 +0000 UTC m=+114.768093649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d221c560-78a6-47ae-a3da-e0a6bd649e8b-serving-cert\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596133 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j8jwz\" (UniqueName: \"kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-key\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596217 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrjc\" (UniqueName: \"kubernetes.io/projected/fc709ff6-80b7-4208-a534-3311e895e710-kube-api-access-dgrjc\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596240 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-client\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d70b549b-1501-43b7-9b26-501ab5e58cf5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596294 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fc709ff6-80b7-4208-a534-3311e895e710-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nwphz\" (UniqueName: \"kubernetes.io/projected/be87d8a1-7626-450d-98af-e9b2bdbf91a1-kube-api-access-nwphz\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596386 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6tp\" (UniqueName: \"kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596413 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-registration-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596450 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcp6f\" (UniqueName: \"kubernetes.io/projected/6688fee1-c5c8-4299-a2d3-5933a57d2099-kube-api-access-mcp6f\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596473 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e253e29-6c77-46b2-94a2-b75a825444f0-tmpfs\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a0f226-35a2-4d2a-bae8-caa664a3f12f-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596577 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-serving-cert\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-apiservice-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596634 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fgllw\" (UniqueName: \"kubernetes.io/projected/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-kube-api-access-fgllw\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596659 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-csi-data-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596679 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvb8\" (UniqueName: \"kubernetes.io/projected/4c495775-98b3-40e4-a984-75b3a9b6209c-kube-api-access-7lvb8\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596701 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-srv-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596735 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17257e16-3a91-4dec-b45e-bc409c0c9a09-config-volume\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.596760 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzg8\" (UniqueName: \"kubernetes.io/projected/d29bd20f-dc09-4486-84f2-4debbc6f931f-kube-api-access-sfzg8\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 
19:31:01.596940 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.597309 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7e253e29-6c77-46b2-94a2-b75a825444f0-tmpfs\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.597356 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-csi-data-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.598511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/17257e16-3a91-4dec-b45e-bc409c0c9a09-tmp-dir\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.598515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600101 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6688fee1-c5c8-4299-a2d3-5933a57d2099-srv-cert\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600686 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-certs\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600717 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d221c560-78a6-47ae-a3da-e0a6bd649e8b-available-featuregates\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bqzsm\" (UniqueName: \"kubernetes.io/projected/157ad135-aa3e-4834-adc9-c5c417319d33-kube-api-access-bqzsm\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600774 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d70b549b-1501-43b7-9b26-501ab5e58cf5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600816 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-config\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.600959 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wz74b\" (UniqueName: \"kubernetes.io/projected/5307c139-e352-4fcf-97e2-07e71a2e40ed-kube-api-access-wz74b\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601018 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvprs\" (UniqueName: \"kubernetes.io/projected/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-kube-api-access-wvprs\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601049 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-mountpoint-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601082 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-default-certificate\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c495775-98b3-40e4-a984-75b3a9b6209c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601153 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601204 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601553 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-config\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601737 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.601948 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.602400 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.602701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-tmpfs\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.602760 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-config\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.603196 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-registration-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.603708 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70b549b-1501-43b7-9b26-501ab5e58cf5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.604468 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-srv-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.604622 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92f9c73f-321e-4610-9cc8-cf819293369d-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.604866 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-plugins-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.605021 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bdfdfbe7-994c-4e98-ac93-9627b4264429-mountpoint-dir\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.605106 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-apiservice-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.605393 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-serving-cert\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.605631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.606322 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-service-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.606971 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-ca\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.606977 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d221c560-78a6-47ae-a3da-e0a6bd649e8b-available-featuregates\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.607070 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d70b549b-1501-43b7-9b26-501ab5e58cf5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.607478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6688fee1-c5c8-4299-a2d3-5933a57d2099-tmpfs\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.608080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/157ad135-aa3e-4834-adc9-c5c417319d33-etcd-client\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.608277 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.608513 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-webhook-cert\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.608750 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.610283 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.610481 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.613478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d70b549b-1501-43b7-9b26-501ab5e58cf5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.614876 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a0f226-35a2-4d2a-bae8-caa664a3f12f-config\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.615371 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e253e29-6c77-46b2-94a2-b75a825444f0-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.615473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a0f226-35a2-4d2a-bae8-caa664a3f12f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.617108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-serving-cert\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.622850 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-key\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.633522 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.634472 5120 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d29bd20f-dc09-4486-84f2-4debbc6f931f-signing-cabundle\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.650445 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.659331 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.659363 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.659496 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.670712 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.688509 5120 request.go:752] "Waited before sending request" delay="1.007245288s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.706672 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.711548 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.719635 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.720444 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.220402892 +0000 UTC m=+114.892509541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.720552 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.720973 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.22096525 +0000 UTC m=+114.893071899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.721796 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7cd1071-0374-4bcd-b58b-2614dba70805-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.729960 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.751378 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.777501 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.790018 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.794936 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7"] Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.810450 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.820473 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92f9c73f-321e-4610-9cc8-cf819293369d-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.822444 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.822589 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.322569074 +0000 UTC m=+114.994675723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.822661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.822984 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.322972957 +0000 UTC m=+114.995079606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.829561 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.850863 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.856564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e207158-b72d-4f6a-9e10-6647b59b0cf1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: \"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.870432 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.878774 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be87d8a1-7626-450d-98af-e9b2bdbf91a1-config\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.890791 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.909716 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.924410 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.924559 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.424533799 +0000 UTC m=+115.096640448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.924720 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be87d8a1-7626-450d-98af-e9b2bdbf91a1-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.924954 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:01 crc kubenswrapper[5120]: E1208 19:31:01.925513 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.425489928 +0000 UTC m=+115.097596617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.930801 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.949908 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.969840 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:31:01 crc kubenswrapper[5120]: I1208 19:31:01.989767 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.015717 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.020672 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: 
\"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.026801 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.026956 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.526935287 +0000 UTC m=+115.199041936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.027361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.027675 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.52765767 +0000 UTC m=+115.199764319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.029927 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.042230 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.049773 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.072313 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.092268 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.103852 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.110089 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.129098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.129286 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.629264273 +0000 UTC m=+115.301370932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.129815 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.130211 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.630190113 +0000 UTC m=+115.302296842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.130899 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.140459 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-default-certificate\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.150267 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.161392 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-stats-auth\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.170245 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.178252 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c495775-98b3-40e4-a984-75b3a9b6209c-service-ca-bundle\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.190861 5120 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.210795 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.211466 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" event={"ID":"7d6c9e9e-2924-4940-baca-0d24615c9513","Type":"ContainerStarted","Data":"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.211504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" event={"ID":"7d6c9e9e-2924-4940-baca-0d24615c9513","Type":"ContainerStarted","Data":"7fb37c35f38964808a528f44a97a10880bcedf41ce47b0b7bcde0330989f29ed"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.212297 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.213133 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" event={"ID":"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9","Type":"ContainerStarted","Data":"9743a4a5fdc92db0cc48a0e7a757e94198035548cb0a6df96d1be157e79bf32d"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.215236 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" event={"ID":"1d19622f-cb5b-4ce7-85be-594da032f286","Type":"ContainerStarted","Data":"21c6b54ad82276fa2c2f3d09674d4fd2e3fef3090eb0f17e893e9490178c8ec9"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.215272 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" event={"ID":"1d19622f-cb5b-4ce7-85be-594da032f286","Type":"ContainerStarted","Data":"f638addb757cb29c790d8ec8d5315fa6b3ba1adc97a152800ad952032cf2153a"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.215286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" event={"ID":"1d19622f-cb5b-4ce7-85be-594da032f286","Type":"ContainerStarted","Data":"b9f0be76a401fe428c6055abab54b855a801e5b580c1a2910076af486b3e06c8"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.216391 5120 generic.go:358] "Generic (PLEG): container finished" podID="184dedef-1313-4bdf-ba4b-d4e022e05b81" containerID="d0fa28edf61386c1cafe68cd1e7c9bed202c89c786d3ada3d875cae4c036f250" exitCode=0 Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.216430 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" event={"ID":"184dedef-1313-4bdf-ba4b-d4e022e05b81","Type":"ContainerDied","Data":"d0fa28edf61386c1cafe68cd1e7c9bed202c89c786d3ada3d875cae4c036f250"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.216444 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" event={"ID":"184dedef-1313-4bdf-ba4b-d4e022e05b81","Type":"ContainerStarted","Data":"6a27fc8952833d50f5c7536bd86bc54f6e58bea56c19b3a7ace0148677c64be8"} Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.232296 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.232948 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.732925872 +0000 UTC m=+115.405032521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.233412 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.236778 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c495775-98b3-40e4-a984-75b3a9b6209c-metrics-certs\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.252428 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.270012 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.273849 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fc709ff6-80b7-4208-a534-3311e895e710-images\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.290605 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.301663 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc709ff6-80b7-4208-a534-3311e895e710-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.311306 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.320746 5120 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5307c139-e352-4fcf-97e2-07e71a2e40ed-webhook-certs\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.329487 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.335041 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.337190 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.837153218 +0000 UTC m=+115.509259867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.350545 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.369394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.379074 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/17257e16-3a91-4dec-b45e-bc409c0c9a09-metrics-tls\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.390675 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.397181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17257e16-3a91-4dec-b45e-bc409c0c9a09-config-volume\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.410203 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.417499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: 
\"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.429747 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.435831 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.436519 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.936505171 +0000 UTC m=+115.608611820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.440838 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-cert\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.450691 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.470846 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.490467 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.512416 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.531061 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.537914 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.538610 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.038592959 +0000 UTC m=+115.710699608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.544214 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-certs\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.551886 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.559414 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-node-bootstrap-token\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.570221 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.581446 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41580: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.591895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.609641 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.614305 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41588: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.639889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.640052 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.140028228 +0000 UTC m=+115.812134877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.640499 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.640924 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.140906536 +0000 UTC m=+115.813013265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.642794 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41594: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.667948 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a1d6f36-c28b-4eea-b32e-37557479492e-kube-api-access\") pod \"kube-apiserver-operator-575994946d-f8j62\" (UID: \"4a1d6f36-c28b-4eea-b32e-37557479492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.681201 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41610: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.684833 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzsbn\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.704911 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swdk\" (UniqueName: \"kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk\") pod \"oauth-openshift-66458b6674-rbgvm\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.708601 5120 request.go:752] "Waited before sending request" delay="1.229733222s" reason="client-side throttling, not priority and fairness" verb="POST" 
URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.726101 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55nxq\" (UniqueName: \"kubernetes.io/projected/6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7-kube-api-access-55nxq\") pod \"apiserver-8596bd845d-2wgnv\" (UID: \"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.741589 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.741982 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.241956452 +0000 UTC m=+115.914063101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.742273 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.742727 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.242704525 +0000 UTC m=+115.914811174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.747345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzxfp\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-kube-api-access-bzxfp\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.762570 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.768896 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k86bk\" (UniqueName: \"kubernetes.io/projected/29788dd4-fad8-42a9-8e4b-1fc7fc16d904-kube-api-access-k86bk\") pod \"openshift-apiserver-operator-846cbfc458-xfdqr\" (UID: \"29788dd4-fad8-42a9-8e4b-1fc7fc16d904\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.781634 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41620: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.786180 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvxh\" (UniqueName: \"kubernetes.io/projected/3d98f7b7-d51d-44f4-a84c-edfae10c5964-kube-api-access-wvvxh\") pod \"authentication-operator-7f5c659b84-m8qnz\" (UID: \"3d98f7b7-d51d-44f4-a84c-edfae10c5964\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.799868 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41626: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.806675 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.813577 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqfs\" (UniqueName: \"kubernetes.io/projected/977ffd05-e876-4f78-95f6-80c1e31b71c3-kube-api-access-9tqfs\") pod \"console-operator-67c89758df-5gcj2\" (UID: \"977ffd05-e876-4f78-95f6-80c1e31b71c3\") " pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.826965 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz28\" (UniqueName: \"kubernetes.io/projected/fdc330d9-fcf6-4bbb-91bb-decbb280945b-kube-api-access-mbz28\") pod \"machine-api-operator-755bb95488-xdqgf\" (UID: \"fdc330d9-fcf6-4bbb-91bb-decbb280945b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.831607 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.843999 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.844538 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.344514815 +0000 UTC m=+116.016621464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.849425 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b303839-3284-4b71-b006-360460533813-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-j6kgc\" (UID: \"4b303839-3284-4b71-b006-360460533813\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.858475 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.870788 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb8bx\" (UniqueName: \"kubernetes.io/projected/53280245-e8bc-4afc-805c-95ed67a48227-kube-api-access-cb8bx\") pod \"dns-operator-799b87ffcd-l8lnb\" (UID: \"53280245-e8bc-4afc-805c-95ed67a48227\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.885146 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41642: no serving certificate available for the kubelet" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.894731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l59h\" (UniqueName: \"kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h\") pod \"route-controller-manager-776cdc94d6-hqj5k\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.909807 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.920402 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfv2f\" (UniqueName: \"kubernetes.io/projected/97b101ad-fe48-408d-8965-af78f6b66e12-kube-api-access-wfv2f\") pod \"cluster-samples-operator-6b564684c8-bz7k2\" (UID: \"97b101ad-fe48-408d-8965-af78f6b66e12\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.943884 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.945647 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:02 crc kubenswrapper[5120]: E1208 19:31:02.946148 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.446133149 +0000 UTC m=+116.118239798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.947332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xr47\" (UniqueName: \"kubernetes.io/projected/1fb4b841-4488-42e3-9fc7-2062a0a5c7a8-kube-api-access-2xr47\") pod \"console-64d44f6ddf-qrz2m\" (UID: \"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8\") " pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.966396 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.970320 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54r2\" (UniqueName: \"kubernetes.io/projected/7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48-kube-api-access-b54r2\") pod \"machine-config-server-d42vt\" (UID: \"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48\") " pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.972360 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv"] Dec 08 19:31:02 crc kubenswrapper[5120]: I1208 19:31:02.996389 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mmh\" (UniqueName: \"kubernetes.io/projected/bdfdfbe7-994c-4e98-ac93-9627b4264429-kube-api-access-l6mmh\") pod \"csi-hostpathplugin-tjxf5\" (UID: \"bdfdfbe7-994c-4e98-ac93-9627b4264429\") " pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:03 crc kubenswrapper[5120]: W1208 19:31:03.008407 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c94f6bb_7e52_4b71_b8fd_6fcdb603f9d7.slice/crio-d08029ea12659967ef9f3c982a759baed73903897a1184bfbc8a6f23459bc43b WatchSource:0}: Error finding container d08029ea12659967ef9f3c982a759baed73903897a1184bfbc8a6f23459bc43b: Status 404 returned error can't find the container with id d08029ea12659967ef9f3c982a759baed73903897a1184bfbc8a6f23459bc43b Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.024495 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d42vt" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.027946 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdj2\" (UniqueName: \"kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2\") pod \"cni-sysctl-allowlist-ds-bfzmj\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.031999 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.032511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trjml\" (UniqueName: \"kubernetes.io/projected/8e207158-b72d-4f6a-9e10-6647b59b0cf1-kube-api-access-trjml\") pod \"control-plane-machine-set-operator-75ffdb6fcd-sf9b8\" (UID: \"8e207158-b72d-4f6a-9e10-6647b59b0cf1\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.041700 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.046679 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gmbg\" (UniqueName: \"kubernetes.io/projected/92f9c73f-321e-4610-9cc8-cf819293369d-kube-api-access-9gmbg\") pod \"machine-config-controller-f9cdd68f7-n7n7g\" (UID: \"92f9c73f-321e-4610-9cc8-cf819293369d\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.048347 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.048683 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.048884 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.548864888 +0000 UTC m=+116.220971537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.054959 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.069208 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.070929 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvb8\" (UniqueName: \"kubernetes.io/projected/4c495775-98b3-40e4-a984-75b3a9b6209c-kube-api-access-7lvb8\") pod \"router-default-68cf44c8b8-bfw55\" (UID: \"4c495775-98b3-40e4-a984-75b3a9b6209c\") " pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.078740 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.085329 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.091191 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8jwz\" (UniqueName: \"kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz\") pod \"marketplace-operator-547dbd544d-htdxf\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.113362 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.119728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgllw\" (UniqueName: \"kubernetes.io/projected/aa8a1fd7-19b4-4bd7-b070-7132f4770f7f-kube-api-access-fgllw\") pod \"openshift-controller-manager-operator-686468bdd5-4g5zl\" (UID: \"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.135880 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.141637 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrjc\" (UniqueName: \"kubernetes.io/projected/fc709ff6-80b7-4208-a534-3311e895e710-kube-api-access-dgrjc\") pod \"machine-config-operator-67c9d58cbb-ttdsl\" (UID: \"fc709ff6-80b7-4208-a534-3311e895e710\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.151370 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.152103 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.152498 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.652480745 +0000 UTC m=+116.324587394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.157133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwphz\" (UniqueName: \"kubernetes.io/projected/be87d8a1-7626-450d-98af-e9b2bdbf91a1-kube-api-access-nwphz\") pod \"kube-storage-version-migrator-operator-565b79b866-kxttk\" (UID: \"be87d8a1-7626-450d-98af-e9b2bdbf91a1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.168270 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-974m8\" (UniqueName: \"kubernetes.io/projected/1a0ac583-3330-481c-80b9-a58ab58c4786-kube-api-access-974m8\") pod \"migrator-866fcbc849-9zgcs\" (UID: \"1a0ac583-3330-481c-80b9-a58ab58c4786\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.191434 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-l8lnb"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.200976 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25a0f226-35a2-4d2a-bae8-caa664a3f12f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-bwgg5\" (UID: \"25a0f226-35a2-4d2a-bae8-caa664a3f12f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.216870 5120 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-bmg84 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.216924 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.228938 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlmqv\" (UniqueName: \"kubernetes.io/projected/c7cd1071-0374-4bcd-b58b-2614dba70805-kube-api-access-hlmqv\") pod \"package-server-manager-77f986bd66-dkxdr\" (UID: \"c7cd1071-0374-4bcd-b58b-2614dba70805\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.230214 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.235863 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzk44\" (UniqueName: \"kubernetes.io/projected/1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a-kube-api-access-bzk44\") pod \"packageserver-7d4fc7d867-p92n5\" (UID: \"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.240352 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.246718 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.250889 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6tp\" (UniqueName: \"kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp\") pod \"collect-profiles-29420370-cblk8\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.251754 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-5gcj2"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.257397 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.260534 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d42vt" event={"ID":"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48","Type":"ContainerStarted","Data":"e09cc97be738eb247957340f907947df640dd4414c7b6de890897eba395b3865"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.260973 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.261558 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.761538604 +0000 UTC m=+116.433645263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.261651 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.261846 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41646: no serving certificate available for the kubelet" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.270627 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjwb\" (UniqueName: \"kubernetes.io/projected/7e253e29-6c77-46b2-94a2-b75a825444f0-kube-api-access-lhjwb\") pod \"olm-operator-5cdf44d969-8b45m\" (UID: \"7e253e29-6c77-46b2-94a2-b75a825444f0\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.270817 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.279474 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.282362 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.296920 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74btt\" (UniqueName: \"kubernetes.io/projected/2d163a97-86f5-4aa7-8013-c8f6a860724c-kube-api-access-74btt\") pod \"downloads-747b44746d-x44k9\" (UID: \"2d163a97-86f5-4aa7-8013-c8f6a860724c\") " pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.301334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" event={"ID":"4a1d6f36-c28b-4eea-b32e-37557479492e","Type":"ContainerStarted","Data":"ae66bb52f64faf16b71601963185290ff6be06e0c7acbdd32d14883c93643380"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.306288 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.314614 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.314698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjbz\" (UniqueName: \"kubernetes.io/projected/2b8e1b60-eeac-422d-a3ed-e9f67b318cf6-kube-api-access-lpjbz\") pod \"service-ca-operator-5b9c976747-t5fnb\" (UID: \"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.331670 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" event={"ID":"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7","Type":"ContainerStarted","Data":"d08029ea12659967ef9f3c982a759baed73903897a1184bfbc8a6f23459bc43b"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.342102 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjv5\" (UniqueName: \"kubernetes.io/projected/17257e16-3a91-4dec-b45e-bc409c0c9a09-kube-api-access-6zjv5\") pod \"dns-default-sjvd8\" (UID: \"17257e16-3a91-4dec-b45e-bc409c0c9a09\") " pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.363759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.364246 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.864230481 +0000 UTC m=+116.536337130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.367902 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz74b\" (UniqueName: \"kubernetes.io/projected/5307c139-e352-4fcf-97e2-07e71a2e40ed-kube-api-access-wz74b\") pod \"multus-admission-controller-69db94689b-r96t4\" (UID: \"5307c139-e352-4fcf-97e2-07e71a2e40ed\") " pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.381095 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" event={"ID":"53280245-e8bc-4afc-805c-95ed67a48227","Type":"ContainerStarted","Data":"b32d660598542f5b254a584a65e91c7f1ef09e867db60948325ea0d1f4f3c9ab"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.385038 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvprs\" (UniqueName: \"kubernetes.io/projected/312d44e5-0f47-4c8c-a43d-5dba1a9434fc-kube-api-access-wvprs\") pod \"ingress-canary-q9j44\" (UID: \"312d44e5-0f47-4c8c-a43d-5dba1a9434fc\") " pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.386046 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" event={"ID":"74731f18-a532-487c-b679-3d850acf1edd","Type":"ContainerStarted","Data":"8f5014b94aba870f63941b2dd3be46fca327b48c5e15a909dadfae5e288c60b2"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.392416 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.393050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" event={"ID":"f9a01a39-6668-4d4b-92d6-1435fd9c1cf9","Type":"ContainerStarted","Data":"39db18872cee289eb3ec88c2b0b5a9ff61dfffd2fc75537d95fa9b2baf28f7c1"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.403244 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" event={"ID":"184dedef-1313-4bdf-ba4b-d4e022e05b81","Type":"ContainerStarted","Data":"76e62099a482e4ac08957a020374726c8bdf87a4095e7fd9bc3791519f0b2e7a"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.403286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" event={"ID":"184dedef-1313-4bdf-ba4b-d4e022e05b81","Type":"ContainerStarted","Data":"a57139ce10cdd44928a264c1ac43cdb33ecd462532e34402bb77a7abe7805a92"} Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.406157 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.409688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfsc8\" (UniqueName: 
\"kubernetes.io/projected/d221c560-78a6-47ae-a3da-e0a6bd649e8b-kube-api-access-wfsc8\") pod \"openshift-config-operator-5777786469-wt7pp\" (UID: \"d221c560-78a6-47ae-a3da-e0a6bd649e8b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.424022 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcp6f\" (UniqueName: \"kubernetes.io/projected/6688fee1-c5c8-4299-a2d3-5933a57d2099-kube-api-access-mcp6f\") pod \"catalog-operator-75ff9f647d-wjgnl\" (UID: \"6688fee1-c5c8-4299-a2d3-5933a57d2099\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.437077 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzg8\" (UniqueName: \"kubernetes.io/projected/d29bd20f-dc09-4486-84f2-4debbc6f931f-kube-api-access-sfzg8\") pod \"service-ca-74545575db-tfvd8\" (UID: \"d29bd20f-dc09-4486-84f2-4debbc6f931f\") " pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.444615 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.456275 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqzsm\" (UniqueName: \"kubernetes.io/projected/157ad135-aa3e-4834-adc9-c5c417319d33-kube-api-access-bqzsm\") pod \"etcd-operator-69b85846b6-8bp5k\" (UID: \"157ad135-aa3e-4834-adc9-c5c417319d33\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.464474 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.464663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.464787 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.964762731 +0000 UTC m=+116.636869380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.464962 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.470685 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.970667697 +0000 UTC m=+116.642774346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.471834 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.477143 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.478543 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.480549 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d70b549b-1501-43b7-9b26-501ab5e58cf5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-xxvlp\" (UID: \"d70b549b-1501-43b7-9b26-501ab5e58cf5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.489499 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.490881 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.499557 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:03 crc kubenswrapper[5120]: W1208 19:31:03.508725 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c495775_98b3_40e4_a984_75b3a9b6209c.slice/crio-d5d2ce5c631ae2e107fc722394380c2e1bcad3c3b7e8c990b551289af4acb746 WatchSource:0}: Error finding container d5d2ce5c631ae2e107fc722394380c2e1bcad3c3b7e8c990b551289af4acb746: Status 404 returned error can't find the container with id d5d2ce5c631ae2e107fc722394380c2e1bcad3c3b7e8c990b551289af4acb746 Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.509092 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.513047 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.516993 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-tfvd8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.531898 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.532690 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.562102 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.563291 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-xdqgf"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.566777 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.567248 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.067228172 +0000 UTC m=+116.739334821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.590287 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.596883 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.601888 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.612504 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q9j44" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.629957 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.658731 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tjxf5"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.669090 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.670003 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.169986642 +0000 UTC m=+116.842093291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.738374 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.760288 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.770919 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.771507 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.271486332 +0000 UTC m=+116.943592981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: W1208 19:31:03.821110 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d77b19d_39e0_468f_b4b4_63ec407092de.slice/crio-5e145a60597d6f3014c340e9cece9535aa1bd1044922ad215d8fa7a647056b84 WatchSource:0}: Error finding container 5e145a60597d6f3014c340e9cece9535aa1bd1044922ad215d8fa7a647056b84: Status 404 returned error can't find the container with id 5e145a60597d6f3014c340e9cece9535aa1bd1044922ad215d8fa7a647056b84 Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.874585 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.875077 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.375064058 +0000 UTC m=+117.047170707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.908871 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.962076 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41656: no serving certificate available for the kubelet" Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.976247 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qrz2m"] Dec 08 19:31:03 crc kubenswrapper[5120]: I1208 19:31:03.976699 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5120]: E1208 19:31:03.977244 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:04.477229479 +0000 UTC m=+117.149336128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.079291 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.080019 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.5800066 +0000 UTC m=+117.252113239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.085904 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl"] Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.180408 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.180865 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.680843649 +0000 UTC m=+117.352950298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.282418 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.282799 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.782783253 +0000 UTC m=+117.454889902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.372458 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" podStartSLOduration=97.37243205 podStartE2EDuration="1m37.37243205s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.357676974 +0000 UTC m=+117.029783643" watchObservedRunningTime="2025-12-08 19:31:04.37243205 +0000 UTC m=+117.044538689" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.404318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.404945 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.904927875 +0000 UTC m=+117.577034524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.410990 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-p4lk6" podStartSLOduration=97.410976995 podStartE2EDuration="1m37.410976995s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.409920792 +0000 UTC m=+117.082027441" watchObservedRunningTime="2025-12-08 19:31:04.410976995 +0000 UTC m=+117.083083644" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.489067 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-r5zn7" podStartSLOduration=97.489048266 podStartE2EDuration="1m37.489048266s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.449472059 +0000 UTC m=+117.121578708" watchObservedRunningTime="2025-12-08 19:31:04.489048266 +0000 UTC m=+117.161154915" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.510559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.510830 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.010812883 +0000 UTC m=+117.682919532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.532781 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" event={"ID":"bdfdfbe7-994c-4e98-ac93-9627b4264429","Type":"ContainerStarted","Data":"514a3d7d4b26c6e2657a5e30fc6504563b5b6e5839d1499952420f55bf1cceb3"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.534997 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d42vt" event={"ID":"7a3b3c1c-b975-4efd-bdaf-a59ee50b1f48","Type":"ContainerStarted","Data":"a60e05fb9c13dade855a447a2cec42a61a18a73dc0e0207e0e07aaa73ac0cd6d"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.540477 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" event={"ID":"4a1d6f36-c28b-4eea-b32e-37557479492e","Type":"ContainerStarted","Data":"5f1e0bea71e858b04ab2fd5c95f7a1a64bdd7f124a9f403d68b859771381bffd"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.543940 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" event={"ID":"fc709ff6-80b7-4208-a534-3311e895e710","Type":"ContainerStarted","Data":"286da2691ed420a9abced53e85112de995b18cf87b70d5e920ae7d92a5638262"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.551313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" event={"ID":"fdc330d9-fcf6-4bbb-91bb-decbb280945b","Type":"ContainerStarted","Data":"fc6f780a805cba63eb9c81590206b1928897e607fb303b2471ce9ab575899d40"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.556395 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" event={"ID":"8976d94f-0a56-417c-9460-885a2d7f0155","Type":"ContainerStarted","Data":"713b7e33f849fc0c43e4bbd41b1be657fcdd67d20a56332ea8da8b3184315517"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.559455 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" event={"ID":"97b101ad-fe48-408d-8965-af78f6b66e12","Type":"ContainerStarted","Data":"71d33e2314f8d83eb149a89d1d55a272cd93d563fdd55fbbbd74c040dfcc1040"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.562955 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" event={"ID":"29788dd4-fad8-42a9-8e4b-1fc7fc16d904","Type":"ContainerStarted","Data":"ac7ed138280ca1071bde3ac4a84ba0c425720be83cb7195c217f3fab8df1c34f"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.562993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" 
event={"ID":"29788dd4-fad8-42a9-8e4b-1fc7fc16d904","Type":"ContainerStarted","Data":"82119b7151ad0747e28f2e34bc3dd455b13b2bd1fd5b78955d8c6524f447b57c"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.565969 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" event={"ID":"977ffd05-e876-4f78-95f6-80c1e31b71c3","Type":"ContainerStarted","Data":"b46039dbb14ec38c5a28e4e9b89389ff8322b5ecf5bc0d61450842d20816ae4e"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.566213 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" event={"ID":"977ffd05-e876-4f78-95f6-80c1e31b71c3","Type":"ContainerStarted","Data":"ffdb7d05b515ee73f320144f28473541377c76883e4c288369542a806f4fb243"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.566475 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.568640 5120 patch_prober.go:28] interesting pod/console-operator-67c89758df-5gcj2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.568685 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" podUID="977ffd05-e876-4f78-95f6-80c1e31b71c3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.571501 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" event={"ID":"74731f18-a532-487c-b679-3d850acf1edd","Type":"ContainerStarted","Data":"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.572099 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.575492 5120 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-rbgvm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.575538 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" podUID="74731f18-a532-487c-b679-3d850acf1edd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.602779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" event={"ID":"4c495775-98b3-40e4-a984-75b3a9b6209c","Type":"ContainerStarted","Data":"8da426896a346219d955c975ccc06802ec5c6b46d0dbcd057c26463c7ecaac8d"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.602822 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-68cf44c8b8-bfw55" event={"ID":"4c495775-98b3-40e4-a984-75b3a9b6209c","Type":"ContainerStarted","Data":"d5d2ce5c631ae2e107fc722394380c2e1bcad3c3b7e8c990b551289af4acb746"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.611228 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.612363 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.112343834 +0000 UTC m=+117.784450473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.612793 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qrz2m" event={"ID":"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8","Type":"ContainerStarted","Data":"18b395f26b3e96c90aa9d6fe0afec4d078fd4a624b92f8c3995f5c8af5f2977a"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.618588 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" event={"ID":"1d77b19d-39e0-468f-b4b4-63ec407092de","Type":"ContainerStarted","Data":"5e145a60597d6f3014c340e9cece9535aa1bd1044922ad215d8fa7a647056b84"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.631557 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" podStartSLOduration=97.63154344 podStartE2EDuration="1m37.63154344s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.629897927 +0000 UTC m=+117.302004576" watchObservedRunningTime="2025-12-08 19:31:04.63154344 +0000 UTC m=+117.303650089" Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.655398 5120 generic.go:358] "Generic (PLEG): container finished" podID="6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7" containerID="7ef009a5d2e24d25e52c2628c70f783c40134a1f15ad6645f5fe45e893b9bf4c" exitCode=0 Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.657247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" event={"ID":"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7","Type":"ContainerDied","Data":"7ef009a5d2e24d25e52c2628c70f783c40134a1f15ad6645f5fe45e893b9bf4c"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.681883 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" 
event={"ID":"4b303839-3284-4b71-b006-360460533813","Type":"ContainerStarted","Data":"94cd560e394aaea283fb0f7ce826be679aef7d514beb6a5ceb5a0b119ff3038a"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.684119 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" event={"ID":"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f","Type":"ContainerStarted","Data":"bc4f719588d7e5b1d7d3b2003d8b395be2858b215ff432454c61269f64dbb4eb"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.714145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.714738 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.214721752 +0000 UTC m=+117.886828401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.716283 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" event={"ID":"3d98f7b7-d51d-44f4-a84c-edfae10c5964","Type":"ContainerStarted","Data":"4fc6a6bdb6acf0f7587a0ebb36970685fc68f7c1c159cb76c2dd9cbb94371780"} Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.816051 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.817854 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.317837353 +0000 UTC m=+117.989944002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5120]: I1208 19:31:04.919191 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:04 crc kubenswrapper[5120]: E1208 19:31:04.919805 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.419781908 +0000 UTC m=+118.091888567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.022657 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.023312 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.52325633 +0000 UTC m=+118.195363159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.125773 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.134566 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.634539159 +0000 UTC m=+118.306645808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.239119 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.239636 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.739618041 +0000 UTC m=+118.411724690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.278718 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.306467 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41670: no serving certificate available for the kubelet" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.307909 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bfw55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:05 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 08 19:31:05 crc kubenswrapper[5120]: [+]process-running ok Dec 08 19:31:05 crc kubenswrapper[5120]: healthz check failed Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.308269 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" podUID="4c495775-98b3-40e4-a984-75b3a9b6209c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.330817 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.331554 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.339221 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.341079 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.341532 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.841519235 +0000 UTC m=+118.513625884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: W1208 19:31:05.347555 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8c1a78_0576_4aa3_b7c9_aaf8f420c659.slice/crio-c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528 WatchSource:0}: Error finding container c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528: Status 404 returned error can't find the container with id c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528 Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.375970 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.442051 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.442561 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.94253798 +0000 UTC m=+118.614644629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.489351 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-xfdqr" podStartSLOduration=98.489337885 podStartE2EDuration="1m38.489337885s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.453821726 +0000 UTC m=+118.125928385" watchObservedRunningTime="2025-12-08 19:31:05.489337885 +0000 UTC m=+118.161444534" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.491820 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.497104 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" podStartSLOduration=98.497088089 podStartE2EDuration="1m38.497088089s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.496513812 +0000 UTC m=+118.168620461" watchObservedRunningTime="2025-12-08 19:31:05.497088089 +0000 UTC m=+118.169194738" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.527903 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" podStartSLOduration=98.52789064 podStartE2EDuration="1m38.52789064s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.52690495 +0000 UTC m=+118.199011589" watchObservedRunningTime="2025-12-08 19:31:05.52789064 +0000 UTC m=+118.199997289" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.545109 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.545560 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.045544657 +0000 UTC m=+118.717651306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.569743 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" podStartSLOduration=98.569725819 podStartE2EDuration="1m38.569725819s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.567638004 +0000 UTC m=+118.239744653" watchObservedRunningTime="2025-12-08 19:31:05.569725819 +0000 UTC m=+118.241832468" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.587569 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-d42vt" podStartSLOduration=5.587553582 podStartE2EDuration="5.587553582s" podCreationTimestamp="2025-12-08 19:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.585477947 +0000 UTC m=+118.257584596" watchObservedRunningTime="2025-12-08 19:31:05.587553582 +0000 UTC m=+118.259660231" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.621250 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.626669 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m"] Dec 08 19:31:05 crc kubenswrapper[5120]: W1208 19:31:05.636377 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e253e29_6c77_46b2_94a2_b75a825444f0.slice/crio-533b199c41dd0ec5bd2b09d5ff81d662891077a611d11a2a2dc72e3779394373 WatchSource:0}: Error finding container 533b199c41dd0ec5bd2b09d5ff81d662891077a611d11a2a2dc72e3779394373: Status 404 returned error can't find the container with id 533b199c41dd0ec5bd2b09d5ff81d662891077a611d11a2a2dc72e3779394373 Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.646502 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.646848 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.146821341 +0000 UTC m=+118.818927990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.655256 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-f8j62" podStartSLOduration=98.655237406 podStartE2EDuration="1m38.655237406s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.634020428 +0000 UTC m=+118.306127077" watchObservedRunningTime="2025-12-08 19:31:05.655237406 +0000 UTC m=+118.327344055" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.657119 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-r96t4"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.659434 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x44k9"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.676598 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" podStartSLOduration=98.676582439 podStartE2EDuration="1m38.676582439s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.67595573 +0000 UTC m=+118.348062379" watchObservedRunningTime="2025-12-08 19:31:05.676582439 +0000 UTC m=+118.348689088" Dec 08 19:31:05 crc kubenswrapper[5120]: W1208 19:31:05.731554 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d163a97_86f5_4aa7_8013_c8f6a860724c.slice/crio-a8ea5c234216020331518fa497986ec41e592d16c94b61a63e7e8c00c5020334 WatchSource:0}: Error finding container a8ea5c234216020331518fa497986ec41e592d16c94b61a63e7e8c00c5020334: Status 404 returned error can't find the container with id a8ea5c234216020331518fa497986ec41e592d16c94b61a63e7e8c00c5020334 Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.762553 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.762970 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.262952512 +0000 UTC m=+118.935059161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.798556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qrz2m" event={"ID":"1fb4b841-4488-42e3-9fc7-2062a0a5c7a8","Type":"ContainerStarted","Data":"c5ca708a49563c87e901defe959fdc50b33132c6bc3de1425f6ac3c290f0d2e7"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.802682 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" event={"ID":"1d77b19d-39e0-468f-b4b4-63ec407092de","Type":"ContainerStarted","Data":"7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.809056 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" event={"ID":"6c94f6bb-7e52-4b71-b8fd-6fcdb603f9d7","Type":"ContainerStarted","Data":"36ebccc861ef530414cecb76abc8db127578b3b4d47daa9307dc926cafc38419"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.822145 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" event={"ID":"4b303839-3284-4b71-b006-360460533813","Type":"ContainerStarted","Data":"fb262c181c1a1032364ff549e7dfe043598ae735bfa887f5fd914cf361ecfb08"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.824579 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" podStartSLOduration=98.824550814 podStartE2EDuration="1m38.824550814s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.706995267 +0000 UTC m=+118.379101916" watchObservedRunningTime="2025-12-08 19:31:05.824550814 +0000 UTC m=+118.496657463" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.827891 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-4g5zl" event={"ID":"aa8a1fd7-19b4-4bd7-b070-7132f4770f7f","Type":"ContainerStarted","Data":"26807f8f5c116f58b20cd4d114f6b82828c916e80d1c3e8c8d1beab503682b93"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.828867 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-tfvd8"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.831482 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.834221 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" event={"ID":"1a0ac583-3330-481c-80b9-a58ab58c4786","Type":"ContainerStarted","Data":"de9d642cda3a58c329e3d61bf77ef6963b7ab56b0eb0180fbce982c896e87e0c"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.839952 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-m8qnz" event={"ID":"3d98f7b7-d51d-44f4-a84c-edfae10c5964","Type":"ContainerStarted","Data":"4404227c487bd0be29d0a9e25bf68781f83a3dffc3a10249b8de909d88fcd50e"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.850448 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" event={"ID":"7e253e29-6c77-46b2-94a2-b75a825444f0","Type":"ContainerStarted","Data":"533b199c41dd0ec5bd2b09d5ff81d662891077a611d11a2a2dc72e3779394373"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.852148 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerStarted","Data":"d898a2f2eb938ce5a399ed56fafa813920980502b67f73639b4faf774c629ab2"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.859530 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" event={"ID":"92f9c73f-321e-4610-9cc8-cf819293369d","Type":"ContainerStarted","Data":"146d1cb46bc47435552feb81354b99ed1e715ac7134563e30856e960c24c64a3"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.862949 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.863513 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.863861 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.363833893 +0000 UTC m=+119.035940542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.883444 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" event={"ID":"8e207158-b72d-4f6a-9e10-6647b59b0cf1","Type":"ContainerStarted","Data":"629606444cb50cf45cd807e24e3c7776fb6f2be7df520dd8e6e44bd71ff3c696"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.887041 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" event={"ID":"be87d8a1-7626-450d-98af-e9b2bdbf91a1","Type":"ContainerStarted","Data":"70e602cf5be634ce0f92dedbf86cff69b13ebba8acf6d1cdcf4ef8f461718460"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.888509 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" event={"ID":"fc709ff6-80b7-4208-a534-3311e895e710","Type":"ContainerStarted","Data":"9bab8c3f65ca575c659db1e5a57514c01c583ab81003fbb6c0ed96f71d7846d6"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.890289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" event={"ID":"fdc330d9-fcf6-4bbb-91bb-decbb280945b","Type":"ContainerStarted","Data":"541bc8c8ecc4d944355d06d87218f96e5a1bc5d9720b9c578906a7b6854d6ac5"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.890317 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" event={"ID":"fdc330d9-fcf6-4bbb-91bb-decbb280945b","Type":"ContainerStarted","Data":"846202e8c9b778cc9b9a2c74bc8a38f5b4d3cd047509be08dca656978f1aa134"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.899755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" event={"ID":"8976d94f-0a56-417c-9460-885a2d7f0155","Type":"ContainerStarted","Data":"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c"} Dec 08 19:31:05 crc kubenswrapper[5120]: W1208 19:31:05.899840 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157ad135_aa3e_4834_adc9_c5c417319d33.slice/crio-c22589e5d94301a6867f1f19d82dd3c3cdae4208cd35c9fe4fc76b2dac3aa795 WatchSource:0}: Error finding container c22589e5d94301a6867f1f19d82dd3c3cdae4208cd35c9fe4fc76b2dac3aa795: Status 404 returned error can't find the container with id c22589e5d94301a6867f1f19d82dd3c3cdae4208cd35c9fe4fc76b2dac3aa795 Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.904498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" event={"ID":"97b101ad-fe48-408d-8965-af78f6b66e12","Type":"ContainerStarted","Data":"5a08837efcd4cb97dcc893b37a8e58a72c56451dc12f78c33659f487d20d7851"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.904551 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" event={"ID":"97b101ad-fe48-408d-8965-af78f6b66e12","Type":"ContainerStarted","Data":"868ff1d37b1429df6e3b5692a09c5bb63a77b3339d590ec7dfc6b34f9dfdefd6"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.907286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" event={"ID":"53280245-e8bc-4afc-805c-95ed67a48227","Type":"ContainerStarted","Data":"48be97dbb2b933faa52801d003271fccb44877908e72ad197ce0536c66a343b6"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.907314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" event={"ID":"53280245-e8bc-4afc-805c-95ed67a48227","Type":"ContainerStarted","Data":"b457667dce29c71fe4ad0ec3ef5252d40aab8288de68f5eecdcd75adb3b86a82"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.912513 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" event={"ID":"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659","Type":"ContainerStarted","Data":"c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528"} Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.925468 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sjvd8"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.933343 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.934917 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.951008 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.951074 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.957281 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.962093 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q9j44"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.967850 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wt7pp"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.971857 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.974500 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:05 crc kubenswrapper[5120]: E1208 19:31:05.975660 5120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.475647778 +0000 UTC m=+119.147754417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.988297 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr"] Dec 08 19:31:05 crc kubenswrapper[5120]: I1208 19:31:05.997501 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-l8lnb" podStartSLOduration=98.997481787 podStartE2EDuration="1m38.997481787s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.978517258 +0000 UTC m=+118.650623927" watchObservedRunningTime="2025-12-08 19:31:05.997481787 +0000 UTC m=+118.669588446" Dec 08 19:31:06 crc kubenswrapper[5120]: W1208 19:31:06.005304 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b8e1b60_eeac_422d_a3ed_e9f67b318cf6.slice/crio-3f604b67306275543339577c379d23c34281e4c38c451b334ec44863b4492738 WatchSource:0}: Error finding container 3f604b67306275543339577c379d23c34281e4c38c451b334ec44863b4492738: Status 404 returned error can't find the container with id 3f604b67306275543339577c379d23c34281e4c38c451b334ec44863b4492738 Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.018869 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-qrz2m" podStartSLOduration=99.018851811 podStartE2EDuration="1m39.018851811s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.009459954 +0000 UTC m=+118.681566603" watchObservedRunningTime="2025-12-08 19:31:06.018851811 +0000 UTC m=+118.690958460" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.031668 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" podStartSLOduration=99.031653774 podStartE2EDuration="1m39.031653774s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.030752776 +0000 UTC m=+118.702859425" watchObservedRunningTime="2025-12-08 19:31:06.031653774 +0000 UTC m=+118.703760423" Dec 08 19:31:06 crc kubenswrapper[5120]: W1208 19:31:06.051338 5120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312d44e5_0f47_4c8c_a43d_5dba1a9434fc.slice/crio-569ab9cf6ca674e3ec78889078770a91ea92a7a1957447ab86a200ea6501cc81 WatchSource:0}: Error finding container 569ab9cf6ca674e3ec78889078770a91ea92a7a1957447ab86a200ea6501cc81: Status 404 returned error can't find the container with id 569ab9cf6ca674e3ec78889078770a91ea92a7a1957447ab86a200ea6501cc81 Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.053214 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" podStartSLOduration=6.053201014 podStartE2EDuration="6.053201014s" podCreationTimestamp="2025-12-08 19:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.05183887 +0000 UTC m=+118.723945529" watchObservedRunningTime="2025-12-08 19:31:06.053201014 +0000 UTC m=+118.725307663" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.077272 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.077630 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.577611613 +0000 UTC m=+119.249718262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.079677 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-5gcj2" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.089108 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" podStartSLOduration=99.089090105 podStartE2EDuration="1m39.089090105s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.086439672 +0000 UTC m=+118.758546331" watchObservedRunningTime="2025-12-08 19:31:06.089090105 +0000 UTC m=+118.761196744" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.187656 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.208683 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-xdqgf" podStartSLOduration=99.208667365 podStartE2EDuration="1m39.208667365s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.206975452 +0000 UTC m=+118.879082101" watchObservedRunningTime="2025-12-08 19:31:06.208667365 +0000 UTC m=+118.880774014" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.209327 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.709310166 +0000 UTC m=+119.381416815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.210267 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" podStartSLOduration=99.210256346 podStartE2EDuration="1m39.210256346s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.116646994 +0000 UTC m=+118.788753643" watchObservedRunningTime="2025-12-08 19:31:06.210256346 +0000 UTC m=+118.882363015" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.294193 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.294484 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.794467611 +0000 UTC m=+119.466574260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.295068 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bfw55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:06 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 08 19:31:06 crc kubenswrapper[5120]: [+]process-running ok Dec 08 19:31:06 crc kubenswrapper[5120]: healthz check failed Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.295114 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" podUID="4c495775-98b3-40e4-a984-75b3a9b6209c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.312635 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.313218 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.362471 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.417719 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.419236 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.919224254 +0000 UTC m=+119.591330903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.518787 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.518955 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.018922127 +0000 UTC m=+119.691028786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.519494 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.519838 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.019829896 +0000 UTC m=+119.691936545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.591082 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.620192 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.620485 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.120469279 +0000 UTC m=+119.792575928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.721521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.721803 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.221788594 +0000 UTC m=+119.893895243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.822803 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.823015 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.322983115 +0000 UTC m=+119.995089764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.823928 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.824310 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.324299746 +0000 UTC m=+119.996406465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.922410 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" event={"ID":"c7cd1071-0374-4bcd-b58b-2614dba70805","Type":"ContainerStarted","Data":"cb4485eeb5bc488d847e21ea0559114bd743064701ef02b433cddb3e65c64497"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.924891 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" event={"ID":"4b303839-3284-4b71-b006-360460533813","Type":"ContainerStarted","Data":"e8b3bdf2094a7948122c317e09ab343f9838fc6fc6eab0b72e138cd8691b0620"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.927690 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.927841 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.42781711 +0000 UTC m=+120.099923779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.928358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:06 crc kubenswrapper[5120]: E1208 19:31:06.928832 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.428822302 +0000 UTC m=+120.100928951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.935021 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x44k9" event={"ID":"2d163a97-86f5-4aa7-8013-c8f6a860724c","Type":"ContainerStarted","Data":"a8ea5c234216020331518fa497986ec41e592d16c94b61a63e7e8c00c5020334"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.936473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" event={"ID":"7e253e29-6c77-46b2-94a2-b75a825444f0","Type":"ContainerStarted","Data":"d40165242aa25d96077e1838f49145c52fa813a533eb7edd795416f6d33af2cd"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.938269 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.939797 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerStarted","Data":"ee5d8822946afce0e50863b70cdf99e51f5a6d874e36ddd1e885cbd28a0110ee"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.940555 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.941438 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-htdxf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.941488 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.942736 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sjvd8" event={"ID":"17257e16-3a91-4dec-b45e-bc409c0c9a09","Type":"ContainerStarted","Data":"6af9f7cd3ac8062dc5e9731da2ab8415cfb683b0f07c0fad1e53e67554fcfabb"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.967717 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-j6kgc" podStartSLOduration=99.967699287 podStartE2EDuration="1m39.967699287s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:06.967010315 +0000 UTC m=+119.639116984" watchObservedRunningTime="2025-12-08 19:31:06.967699287 +0000 UTC m=+119.639805936" Dec 08 19:31:06 crc 
kubenswrapper[5120]: I1208 19:31:06.982500 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" event={"ID":"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659","Type":"ContainerStarted","Data":"17350700dad4be183d81c8648ad8ca74d5986063fafad0c2aeeb3f6800ca8553"} Dec 08 19:31:06 crc kubenswrapper[5120]: I1208 19:31:06.984852 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.014712 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" event={"ID":"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a","Type":"ContainerStarted","Data":"320a9f04c31f2e2ca9f8f5662ff9189eb5a388b98dcbbe7dab5bf8cc6de3a620"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.015036 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" event={"ID":"1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a","Type":"ContainerStarted","Data":"0a88b4ac88f48186929009496771d5695991f339af307a3bc66973023757d850"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.015382 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.017840 5120 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-p92n5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.017915 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" podUID="1ca7d7fc-2170-4aca-a32c-b41e4d9dda2a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.029275 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" event={"ID":"1a0ac583-3330-481c-80b9-a58ab58c4786","Type":"ContainerStarted","Data":"076d445cb9d2107b1932d126b57e70199c6baf47ff23eaf369da944f1336c947"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.029369 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.029536 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.529503346 +0000 UTC m=+120.201609995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.029568 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8b45m" podStartSLOduration=100.029550087 podStartE2EDuration="1m40.029550087s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.001780022 +0000 UTC m=+119.673886671" watchObservedRunningTime="2025-12-08 19:31:07.029550087 +0000 UTC m=+119.701656736" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.030218 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.030985 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" podStartSLOduration=100.030971293 podStartE2EDuration="1m40.030971293s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.029960961 +0000 UTC m=+119.702067620" watchObservedRunningTime="2025-12-08 19:31:07.030971293 +0000 UTC m=+119.703077962" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.032622 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.532609534 +0000 UTC m=+120.204716183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.034389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q9j44" event={"ID":"312d44e5-0f47-4c8c-a43d-5dba1a9434fc","Type":"ContainerStarted","Data":"569ab9cf6ca674e3ec78889078770a91ea92a7a1957447ab86a200ea6501cc81"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.054026 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" event={"ID":"6688fee1-c5c8-4299-a2d3-5933a57d2099","Type":"ContainerStarted","Data":"51a143f17288e51503e53ed737bfc95b7fef8cacc284279ce9d7f23a52d48453"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.083749 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" podStartSLOduration=67.083731846 podStartE2EDuration="1m7.083731846s" podCreationTimestamp="2025-12-08 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.061879766 +0000 UTC m=+119.733986415" watchObservedRunningTime="2025-12-08 19:31:07.083731846 +0000 UTC m=+119.755838495" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.112424 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-tfvd8" event={"ID":"d29bd20f-dc09-4486-84f2-4debbc6f931f","Type":"ContainerStarted","Data":"58c0da2af8d4656c37172cf7820dc36e1cfd4ab592685a888d56d749b2ddecd0"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.112471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-tfvd8" event={"ID":"d29bd20f-dc09-4486-84f2-4debbc6f931f","Type":"ContainerStarted","Data":"1ab2e1bcce6fd0621984b1ea944880d263fc192ee3c3501b3832ff7e1482a623"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.115173 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" podStartSLOduration=100.115127455 podStartE2EDuration="1m40.115127455s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.113152633 +0000 UTC m=+119.785259302" watchObservedRunningTime="2025-12-08 19:31:07.115127455 +0000 UTC m=+119.787234104" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.124221 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" event={"ID":"d221c560-78a6-47ae-a3da-e0a6bd649e8b","Type":"ContainerStarted","Data":"4b6e68e66277aa72815c6a280336e2160ed4c8346f689d04c40c9e0388ed39ab"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.132769 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" 
event={"ID":"92f9c73f-321e-4610-9cc8-cf819293369d","Type":"ContainerStarted","Data":"ce2d3d9ed198659ff3b4bbaa172568c38d20b60a2ea3b546ffa9859febc82fa9"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.133229 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.133661 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.633628049 +0000 UTC m=+120.305734698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.148750 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-tfvd8" podStartSLOduration=100.148730595 podStartE2EDuration="1m40.148730595s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.147739764 +0000 UTC m=+119.819846413" watchObservedRunningTime="2025-12-08 19:31:07.148730595 +0000 UTC m=+119.820837244" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.163273 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" event={"ID":"8e207158-b72d-4f6a-9e10-6647b59b0cf1","Type":"ContainerStarted","Data":"8976dcc2712cc09bffcda5b04e9e487ff3a089cc3525cc1f6871a35aab5167b1"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.169385 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" event={"ID":"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6","Type":"ContainerStarted","Data":"3f604b67306275543339577c379d23c34281e4c38c451b334ec44863b4492738"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.170799 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" event={"ID":"25a0f226-35a2-4d2a-bae8-caa664a3f12f","Type":"ContainerStarted","Data":"ec78a308b9dfb7648e987acfec55f141334681ecda7db3e7eee9a5917c15a669"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.171924 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" event={"ID":"be87d8a1-7626-450d-98af-e9b2bdbf91a1","Type":"ContainerStarted","Data":"188ffb8305386ec87fafcc8b873c34732ed2d0a2f79edc9f3ab6f3101b2bed65"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.193583 5120 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-sf9b8" podStartSLOduration=100.193561158 podStartE2EDuration="1m40.193561158s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.185605788 +0000 UTC m=+119.857712457" watchObservedRunningTime="2025-12-08 19:31:07.193561158 +0000 UTC m=+119.865667807" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.195409 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" event={"ID":"5307c139-e352-4fcf-97e2-07e71a2e40ed","Type":"ContainerStarted","Data":"72c54275607779c93059e8b525f8d833ab60504c569ed10e486d0b61f66bc18e"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.238141 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.241772 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.741748828 +0000 UTC m=+120.413855477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.289977 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" podStartSLOduration=100.289958808 podStartE2EDuration="1m40.289958808s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.23547432 +0000 UTC m=+119.907580969" watchObservedRunningTime="2025-12-08 19:31:07.289958808 +0000 UTC m=+119.962065457" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.301448 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bfw55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:07 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 08 19:31:07 crc kubenswrapper[5120]: [+]process-running ok Dec 08 19:31:07 crc kubenswrapper[5120]: healthz check failed Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.301537 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" podUID="4c495775-98b3-40e4-a984-75b3a9b6209c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:07 crc 
kubenswrapper[5120]: I1208 19:31:07.307717 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" event={"ID":"fc709ff6-80b7-4208-a534-3311e895e710","Type":"ContainerStarted","Data":"34d74de0336bb08931b02399da6464b61028e4b7eee4ffce551a36379844b2b7"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.336297 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-kxttk" podStartSLOduration=100.336279718 podStartE2EDuration="1m40.336279718s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.293539371 +0000 UTC m=+119.965646020" watchObservedRunningTime="2025-12-08 19:31:07.336279718 +0000 UTC m=+120.008386377" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.343552 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.343906 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.843890739 +0000 UTC m=+120.515997388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.352266 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" event={"ID":"157ad135-aa3e-4834-adc9-c5c417319d33","Type":"ContainerStarted","Data":"c22589e5d94301a6867f1f19d82dd3c3cdae4208cd35c9fe4fc76b2dac3aa795"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.359855 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" event={"ID":"d70b549b-1501-43b7-9b26-501ab5e58cf5","Type":"ContainerStarted","Data":"fd5e0d38412df786771f215d57c9070e44e3ca0c41651115070ac5897c1620cf"} Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.362050 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.370213 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-rlnz2" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.390840 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-ttdsl" podStartSLOduration=100.390823578 
podStartE2EDuration="1m40.390823578s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:07.334491982 +0000 UTC m=+120.006598641" watchObservedRunningTime="2025-12-08 19:31:07.390823578 +0000 UTC m=+120.062930227" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.445474 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.452017 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.951999157 +0000 UTC m=+120.624105806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.488365 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.549136 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.549957 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.049936905 +0000 UTC m=+120.722043554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.599790 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bfzmj"] Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.653289 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.653702 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.153687246 +0000 UTC m=+120.825793895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.757744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.758079 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.258027606 +0000 UTC m=+120.930134255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.758540 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.759419 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.259388509 +0000 UTC m=+120.931495158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.773762 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.774021 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.839974 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.860199 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.860835 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.360812207 +0000 UTC m=+121.032918856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.962675 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:07 crc kubenswrapper[5120]: E1208 19:31:07.963303 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.463288727 +0000 UTC m=+121.135395376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5120]: I1208 19:31:07.990663 5120 ???:1] "http: TLS handshake error from 192.168.126.11:41682: no serving certificate available for the kubelet" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.064862 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.067785 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.567759532 +0000 UTC m=+121.239866181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.169137 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.169424 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.669412447 +0000 UTC m=+121.341519096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.269894 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.270294 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.770278567 +0000 UTC m=+121.442385216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.277007 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-bfw55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:08 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 08 19:31:08 crc kubenswrapper[5120]: [+]process-running ok Dec 08 19:31:08 crc kubenswrapper[5120]: healthz check failed Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.277061 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" podUID="4c495775-98b3-40e4-a984-75b3a9b6209c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.371412 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.371696 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.871681585 +0000 UTC m=+121.543788234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.380874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" event={"ID":"6688fee1-c5c8-4299-a2d3-5933a57d2099","Type":"ContainerStarted","Data":"c28855c73100c91dbb3b9927632f2de8e82f45f2da224159205dba86dfa6bedd"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.381455 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.389733 5120 generic.go:358] "Generic (PLEG): container finished" podID="d221c560-78a6-47ae-a3da-e0a6bd649e8b" containerID="9f5e2e224aa6e1bf0d382692f2773090c8ec649af09247f4d23866e3ffb774ec" exitCode=0 Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.390067 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" event={"ID":"d221c560-78a6-47ae-a3da-e0a6bd649e8b","Type":"ContainerDied","Data":"9f5e2e224aa6e1bf0d382692f2773090c8ec649af09247f4d23866e3ffb774ec"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.404362 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" event={"ID":"92f9c73f-321e-4610-9cc8-cf819293369d","Type":"ContainerStarted","Data":"7ab2b4bda5c23d8917c9dcdea4933ae13975beaeec3b8e7ca0579b309fdeb226"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.424810 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" podStartSLOduration=101.424792979 podStartE2EDuration="1m41.424792979s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.421580958 +0000 UTC m=+121.093687617" watchObservedRunningTime="2025-12-08 19:31:08.424792979 +0000 UTC m=+121.096899628" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.427995 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-t5fnb" event={"ID":"2b8e1b60-eeac-422d-a3ed-e9f67b318cf6","Type":"ContainerStarted","Data":"2480b28a4800efbf5348e03c0a1fd8138dfc679db5b7bcae553a04428cbe6565"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.447870 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" event={"ID":"25a0f226-35a2-4d2a-bae8-caa664a3f12f","Type":"ContainerStarted","Data":"c39c53d4e754dce5ea9a73ba473545c135d48c370d0dcab75176fa61ceef559b"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.461470 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" 
event={"ID":"5307c139-e352-4fcf-97e2-07e71a2e40ed","Type":"ContainerStarted","Data":"a8feed7da69090871cd503db92fa6b903507f1a265511b2342bf1cbe9a9de7f1"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.461512 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" event={"ID":"5307c139-e352-4fcf-97e2-07e71a2e40ed","Type":"ContainerStarted","Data":"7b53f6ea5b2387355f68434e4c3fe30de5258e9cdcc2af788a3a929b7ca9fa60"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.461545 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wjgnl" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.476433 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.478027 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.978012127 +0000 UTC m=+121.650118776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.482335 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" event={"ID":"157ad135-aa3e-4834-adc9-c5c417319d33","Type":"ContainerStarted","Data":"32df7bab0ce15fdc2627c7373cec547deab73c0af22b677f68a908556f515dd2"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.498066 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-n7n7g" podStartSLOduration=101.498052769 podStartE2EDuration="1m41.498052769s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.496347495 +0000 UTC m=+121.168454144" watchObservedRunningTime="2025-12-08 19:31:08.498052769 +0000 UTC m=+121.170159418" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.513396 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" event={"ID":"d70b549b-1501-43b7-9b26-501ab5e58cf5","Type":"ContainerStarted","Data":"687d97814e9d21b33a72d449704fb0e49f4656ec3ee67de1e0bc39adac4a38cb"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.533603 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-8bp5k" podStartSLOduration=101.53358954 podStartE2EDuration="1m41.53358954s" podCreationTimestamp="2025-12-08 
19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.532529236 +0000 UTC m=+121.204635885" watchObservedRunningTime="2025-12-08 19:31:08.53358954 +0000 UTC m=+121.205696189" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.546548 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" event={"ID":"c7cd1071-0374-4bcd-b58b-2614dba70805","Type":"ContainerStarted","Data":"5a4cac6495829879271985cb24d8f9515f1dacf5b3602266db13ec9b75dc2889"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.546606 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" event={"ID":"c7cd1071-0374-4bcd-b58b-2614dba70805","Type":"ContainerStarted","Data":"ecda26d511b4796eb48293a46f5c401567fe6f4c2d171d9cd09306cf7d41b7af"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.547499 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.583601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.588776 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.088753489 +0000 UTC m=+121.760860138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.601793 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x44k9" event={"ID":"2d163a97-86f5-4aa7-8013-c8f6a860724c","Type":"ContainerStarted","Data":"effd70fd3e63d121a069ac98273d3b7ee1830f93d362609177530d58cf29ed0c"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.602972 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.622054 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-x44k9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.622517 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x44k9" podUID="2d163a97-86f5-4aa7-8013-c8f6a860724c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.634822 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sjvd8" event={"ID":"17257e16-3a91-4dec-b45e-bc409c0c9a09","Type":"ContainerStarted","Data":"2204f58d1cc43a27257fe83196a394d70f0c4ceb64d9484528083f091de08e14"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.635040 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sjvd8" event={"ID":"17257e16-3a91-4dec-b45e-bc409c0c9a09","Type":"ContainerStarted","Data":"4eba121e28f7dce17fae04dc2f3338e6c7e4784af2993ff7f9874a89f7ca0f45"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.635919 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.639356 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-r96t4" podStartSLOduration=101.639329203 podStartE2EDuration="1m41.639329203s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.581601924 +0000 UTC m=+121.253708583" watchObservedRunningTime="2025-12-08 19:31:08.639329203 +0000 UTC m=+121.311435852" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.644306 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" event={"ID":"1a0ac583-3330-481c-80b9-a58ab58c4786","Type":"ContainerStarted","Data":"049e44c43099686d689d678917bc3b92373e0e465b68d27c288902e7650be0bd"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.647024 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-q9j44" event={"ID":"312d44e5-0f47-4c8c-a43d-5dba1a9434fc","Type":"ContainerStarted","Data":"c80bd476141449b97540e013b17acf2dd84151acc079a072e7dba84841de8f91"} Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.652623 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-htdxf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.652680 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.660705 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-bwgg5" podStartSLOduration=101.660685386 podStartE2EDuration="1m41.660685386s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.659635384 +0000 UTC m=+121.331742033" watchObservedRunningTime="2025-12-08 19:31:08.660685386 +0000 UTC m=+121.332792035" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.686255 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.687175 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.187140601 +0000 UTC m=+121.859247240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.706372 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-2wgnv" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.728894 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-x44k9" podStartSLOduration=101.728880317 podStartE2EDuration="1m41.728880317s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.726146281 +0000 UTC m=+121.398252930" watchObservedRunningTime="2025-12-08 19:31:08.728880317 +0000 UTC m=+121.400986966" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.763341 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" podStartSLOduration=101.763320423 podStartE2EDuration="1m41.763320423s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.761633829 +0000 UTC m=+121.433740478" watchObservedRunningTime="2025-12-08 19:31:08.763320423 +0000 UTC m=+121.435427072" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.787920 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.793395 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.293380161 +0000 UTC m=+121.965486810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.797191 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9zgcs" podStartSLOduration=101.79715525 podStartE2EDuration="1m41.79715525s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.795123495 +0000 UTC m=+121.467230144" watchObservedRunningTime="2025-12-08 19:31:08.79715525 +0000 UTC m=+121.469261899" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.836386 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-xxvlp" podStartSLOduration=101.836371476 podStartE2EDuration="1m41.836371476s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.835395265 +0000 UTC m=+121.507501914" watchObservedRunningTime="2025-12-08 19:31:08.836371476 +0000 UTC m=+121.508478125" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.874371 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sjvd8" podStartSLOduration=8.874352733 podStartE2EDuration="8.874352733s" podCreationTimestamp="2025-12-08 19:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.872589988 +0000 UTC m=+121.544696637" watchObservedRunningTime="2025-12-08 19:31:08.874352733 +0000 UTC m=+121.546459382" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.890871 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.891448 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.391432232 +0000 UTC m=+122.063538881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.950919 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q9j44" podStartSLOduration=8.950901917 podStartE2EDuration="8.950901917s" podCreationTimestamp="2025-12-08 19:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:08.949063649 +0000 UTC m=+121.621170298" watchObservedRunningTime="2025-12-08 19:31:08.950901917 +0000 UTC m=+121.623008566" Dec 08 19:31:08 crc kubenswrapper[5120]: I1208 19:31:08.993146 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:08 crc kubenswrapper[5120]: E1208 19:31:08.993457 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.493445449 +0000 UTC m=+122.165552098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.093989 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.094263 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.594249327 +0000 UTC m=+122.266355976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.195535 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.195907 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.695889611 +0000 UTC m=+122.367996260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.276893 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.278122 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.294918 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-bfw55" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.296571 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.296973 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.796954968 +0000 UTC m=+122.469061617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.371620 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-p92n5" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.398848 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.400151 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.900137851 +0000 UTC m=+122.572244500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.500340 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.500443 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.000422963 +0000 UTC m=+122.672529612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.500555 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.500824 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.000817205 +0000 UTC m=+122.672923854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.602601 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.602729 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.102709969 +0000 UTC m=+122.774816608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.603059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.603474 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.103463613 +0000 UTC m=+122.775570262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.657012 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" event={"ID":"d221c560-78a6-47ae-a3da-e0a6bd649e8b","Type":"ContainerStarted","Data":"a609d6a08ce7d71addadd84b4bae6f977bcbb01cb9458a25bfd34574b90d8ccf"} Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.657088 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.662633 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-x44k9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.662689 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x44k9" podUID="2d163a97-86f5-4aa7-8013-c8f6a860724c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.664446 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" gracePeriod=30 Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.673335 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" 
event={"ID":"bdfdfbe7-994c-4e98-ac93-9627b4264429","Type":"ContainerStarted","Data":"81358597d065e203e0c243b80b20fbe5b9f42bdb3ea5338041f8f046d12ae9e6"} Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.675411 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.684544 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" podStartSLOduration=102.684529028 podStartE2EDuration="1m42.684529028s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:09.683579848 +0000 UTC m=+122.355686507" watchObservedRunningTime="2025-12-08 19:31:09.684529028 +0000 UTC m=+122.356635677" Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.708719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.709081 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.209061592 +0000 UTC m=+122.881168241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.810265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.812834 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.312784712 +0000 UTC m=+122.984891361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.911118 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.911233 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.411214606 +0000 UTC m=+123.083321255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5120]: I1208 19:31:09.911488 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:09 crc kubenswrapper[5120]: E1208 19:31:09.911873 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.411864965 +0000 UTC m=+123.083971614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.012661 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.013035 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.513007405 +0000 UTC m=+123.185114054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.049633 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.057573 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.060345 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.072980 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.115067 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.121678 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.621659741 +0000 UTC m=+123.293766390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.219843 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.220033 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.720002191 +0000 UTC m=+123.392108840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.220351 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvxc\" (UniqueName: \"kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.220464 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.220605 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.220731 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.220810 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.720803587 +0000 UTC m=+123.392910236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.241055 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.246849 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.249200 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.255394 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322151 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.322300 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.822270916 +0000 UTC m=+123.494377565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322707 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvxc\" (UniqueName: \"kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322791 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322936 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.322992 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wjck\" (UniqueName: \"kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.323070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.323101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.323461 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.823445112 +0000 UTC m=+123.495551751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.323576 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.323675 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.374497 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvxc\" (UniqueName: \"kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc\") pod \"community-operators-7wv6j\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.423906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.424100 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.924072936 +0000 UTC m=+123.596179585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424348 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424417 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wjck\" (UniqueName: \"kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.424684 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.924677324 +0000 UTC m=+123.596783973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424717 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.424869 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.449345 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.459070 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.477721 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.478626 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wjck\" (UniqueName: \"kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck\") pod \"certified-operators-5czqn\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.525962 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.526349 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.026315199 +0000 UTC m=+123.698421848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.526745 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.527075 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.027063593 +0000 UTC m=+123.699170242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.559837 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.627911 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.628226 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.128190172 +0000 UTC m=+123.800296831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.628568 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.628640 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6rh\" (UniqueName: \"kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.628914 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.629133 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.629220 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.129198093 +0000 UTC m=+123.801304812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.632518 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.638826 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.645919 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.671698 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.671929 5120 generic.go:358] "Generic (PLEG): container finished" podID="97b101ad-fe48-408d-8965-af78f6b66e12" containerID="868ff1d37b1429df6e3b5692a09c5bb63a77b3339d590ec7dfc6b34f9dfdefd6" exitCode=2 Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.672039 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" event={"ID":"97b101ad-fe48-408d-8965-af78f6b66e12","Type":"ContainerDied","Data":"868ff1d37b1429df6e3b5692a09c5bb63a77b3339d590ec7dfc6b34f9dfdefd6"} Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.673263 5120 scope.go:117] "RemoveContainer" containerID="868ff1d37b1429df6e3b5692a09c5bb63a77b3339d590ec7dfc6b34f9dfdefd6" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.673959 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.674522 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-x44k9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.678477 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x44k9" podUID="2d163a97-86f5-4aa7-8013-c8f6a860724c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.730753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.731129 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.231112487 +0000 UTC m=+123.903219136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.731437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4q6rh\" (UniqueName: \"kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.731810 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.731976 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.732103 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.732541 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjgxj\" (UniqueName: \"kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.732498 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.732759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.733115 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content\") pod \"community-operators-lfzbg\" 
(UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.733236 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.733442 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.23343483 +0000 UTC m=+123.905541479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.755015 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6rh\" (UniqueName: \"kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh\") pod \"community-operators-lfzbg\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.785407 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.836263 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.336241061 +0000 UTC m=+124.008347710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.836294 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.836592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.836688 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.836749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.836907 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kjgxj\" (UniqueName: \"kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.837890 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.842638 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.342614772 +0000 UTC m=+124.014721551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.843214 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.857661 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.871755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjgxj\" (UniqueName: \"kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj\") pod \"certified-operators-k8v54\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:10 crc kubenswrapper[5120]: W1208 19:31:10.872805 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8646edae_915b_459b_b385_491aaf3939ec.slice/crio-347e33352f282a4d3eb231380c1c26a52b6d99f5d98dbf43335816aa350699f4 WatchSource:0}: Error finding container 347e33352f282a4d3eb231380c1c26a52b6d99f5d98dbf43335816aa350699f4: Status 404 returned error can't find the container with id 347e33352f282a4d3eb231380c1c26a52b6d99f5d98dbf43335816aa350699f4 Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.939949 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.940203 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.440155567 +0000 UTC m=+124.112262226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.941466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:10 crc kubenswrapper[5120]: E1208 19:31:10.941923 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.441906753 +0000 UTC m=+124.114013402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5120]: I1208 19:31:10.957447 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.046622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.046979 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.546962205 +0000 UTC m=+124.219068854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.099479 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.147916 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.148301 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.64828928 +0000 UTC m=+124.320395929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: W1208 19:31:11.148536 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65e44f89_0e9d_46f9_a56b_7f01d1090930.slice/crio-221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac WatchSource:0}: Error finding container 221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac: Status 404 returned error can't find the container with id 221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.213665 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.249053 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.249376 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.749361897 +0000 UTC m=+124.421468546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.308580 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.350271 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.350797 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.850780574 +0000 UTC m=+124.522887223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.451233 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.451585 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.951564721 +0000 UTC m=+124.623671380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.553083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.553520 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.053504316 +0000 UTC m=+124.725610965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.601202 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.622101 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.622253 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.626318 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.626552 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.634344 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.654403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.654748 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:12.154728088 +0000 UTC m=+124.826834737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.685038 5120 generic.go:358] "Generic (PLEG): container finished" podID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerID="3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb" exitCode=0 Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.685198 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerDied","Data":"3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.685225 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerStarted","Data":"967cb21cae878941230f907820190b9e2ca3ade54d9ac75b395075c324e712f0"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.688969 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.689039 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-bz7k2" event={"ID":"97b101ad-fe48-408d-8965-af78f6b66e12","Type":"ContainerStarted","Data":"4cbd29c2d7db0adc47b38d62854398370e6e4647a75d28c48b6eb45b61abafa9"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.693743 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" containerID="17350700dad4be183d81c8648ad8ca74d5986063fafad0c2aeeb3f6800ca8553" exitCode=0 Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.693816 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" event={"ID":"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659","Type":"ContainerDied","Data":"17350700dad4be183d81c8648ad8ca74d5986063fafad0c2aeeb3f6800ca8553"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.704229 5120 generic.go:358] "Generic (PLEG): container finished" podID="8646edae-915b-459b-b385-491aaf3939ec" containerID="ef0d6eeaddefba954e622716fa0751a09176da1ac7c43737ac521fd6b38c5d13" exitCode=0 Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.704444 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerDied","Data":"ef0d6eeaddefba954e622716fa0751a09176da1ac7c43737ac521fd6b38c5d13"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.704482 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" 
event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerStarted","Data":"347e33352f282a4d3eb231380c1c26a52b6d99f5d98dbf43335816aa350699f4"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.715151 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerStarted","Data":"dfc77bbec8f10e7a47b2a6334f80a6e17b1a771f297ec7fde3708d7dc292e99e"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.726408 5120 generic.go:358] "Generic (PLEG): container finished" podID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerID="81f69567f03f2e7f5638d92d7bbf40defd21515b8be615f6a5e1b40987159ee2" exitCode=0 Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.726609 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerDied","Data":"81f69567f03f2e7f5638d92d7bbf40defd21515b8be615f6a5e1b40987159ee2"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.726662 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerStarted","Data":"221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac"} Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.756085 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.756158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.756204 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.756468 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.256456715 +0000 UTC m=+124.928563364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.786253 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.815420 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.815565 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.820871 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.821097 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.857803 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.858004 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.858134 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.858593 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.358576725 +0000 UTC m=+125.030683374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.859284 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.898186 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.959664 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.960015 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:11 crc kubenswrapper[5120]: I1208 19:31:11.960035 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:11 crc kubenswrapper[5120]: E1208 19:31:11.960883 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.46086521 +0000 UTC m=+125.132971859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.000506 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.061348 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.061503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.061545 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.061673 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.061741 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.56172635 +0000 UTC m=+125.233832989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.095767 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.139664 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.164941 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.165331 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.665314627 +0000 UTC m=+125.337421276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.243496 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.252178 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.255725 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.258019 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.268858 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.269086 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.769059057 +0000 UTC m=+125.441165706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.270289 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.270660 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.770652537 +0000 UTC m=+125.442759186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.371454 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.371601 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4qqm\" (UniqueName: \"kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.371628 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.371729 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.871700654 +0000 UTC m=+125.543807303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.371841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.371915 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.372420 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.872412586 +0000 UTC m=+125.544519235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.405531 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.460362 5120 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473200 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473337 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4qqm\" (UniqueName: \"kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473408 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.473740 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:12.973695179 +0000 UTC m=+125.645801828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473891 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.473971 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.478265 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.497883 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4qqm\" (UniqueName: \"kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm\") pod \"redhat-marketplace-bfcr5\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.570039 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.575348 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.575751 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.075734176 +0000 UTC m=+125.747840815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.640737 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.676529 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.676819 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.176802033 +0000 UTC m=+125.848908682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.677509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.677838 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:13.177827175 +0000 UTC m=+125.849933824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.749259 5120 generic.go:358] "Generic (PLEG): container finished" podID="8e09ec29-0759-4098-9957-d1a998ed478c" containerID="e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118" exitCode=0 Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.780587 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.780733 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.280711299 +0000 UTC m=+125.952817958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.780942 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:12 crc kubenswrapper[5120]: E1208 19:31:12.781275 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:13.281264787 +0000 UTC m=+125.953371436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-cdcv9" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.822046 5120 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T19:31:12.46039287Z","UUID":"44bf415e-7783-481a-8514-e3e0c106843c","Handler":null,"Name":"","Endpoint":""} Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.857786 5120 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.857821 5120 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.881483 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.884832 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:31:12 crc kubenswrapper[5120]: I1208 19:31:12.983281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.008136 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
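The entries above mark the turning point for the stuck PersistentVolumeClaim: the repeated MountVolume.MountDevice / UnmountVolume.TearDown failures for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 occur because kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered CSI drivers. Once the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock and the driver is validated and registered at /var/lib/kubelet/plugins/csi-hostpath/csi.sock, the kubelet skips the NodeStageVolume (MountDevice) step because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability, and the pending mount for the image-registry pod can proceed. The following minimal Go sketch is an annotation, not part of the kubelet or this log; it assumes the socket path reported in the registration entry and the upstream CSI Go bindings, and it performs the same two checks the kubelet logs here: asking the driver's Identity service for its name and the Node service for its capabilities.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Endpoint taken from the registration log entry above; adjust for other nodes.
	const endpoint = "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock"

	conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial %s: %v", endpoint, err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Identity.GetPluginInfo returns the driver name the kubelet registers,
	// e.g. "kubevirt.io.hostpath-provisioner" in this log.
	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatalf("GetPluginInfo: %v", err)
	}
	fmt.Printf("driver name: %s, vendor version: %s\n", info.GetName(), info.GetVendorVersion())

	// Node.NodeGetCapabilities reports whether STAGE_UNSTAGE_VOLUME is set;
	// when it is not, the kubelet skips the MountDevice (NodeStageVolume) step.
	caps, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatalf("NodeGetCapabilities: %v", err)
	}
	stageUnstage := false
	for _, c := range caps.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			stageUnstage = true
		}
	}
	fmt.Printf("STAGE_UNSTAGE_VOLUME advertised: %v\n", stageUnstage)
}

Run against this node's CSI socket, a probe like this would be expected to print the driver name kubevirt.io.hostpath-provisioner and report STAGE_UNSTAGE_VOLUME as not advertised, consistent with the "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice" message above and with the SetUp success that follows.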
Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.008209 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021514 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerDied","Data":"e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021573 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6956b13d-429c-4df6-bc64-22e987847479","Type":"ContainerStarted","Data":"50314e47ab3780c62876ca8fb182e62d1682b745f9d42bf939c165a7328fdba6"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021590 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"03c0e578-6d1b-4592-b7ac-8bfad96bc11e","Type":"ContainerStarted","Data":"0c4cbca570fcf018de3858e5515002a883407b9d0cb2fb0833544ac37df2b422"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021614 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021677 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021725 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" event={"ID":"bdfdfbe7-994c-4e98-ac93-9627b4264429","Type":"ContainerStarted","Data":"62e0b9f7ef41f4852ce32a584c9dacb1fa5a93b2e06822e0a479007315cd2c47"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.021871 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.088274 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.088315 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.094034 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-qrz2m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.094101 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-qrz2m" podUID="1fb4b841-4488-42e3-9fc7-2062a0a5c7a8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.104373 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-cdcv9\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.184876 5120 ???:1] "http: TLS handshake error from 192.168.126.11:56134: no serving certificate available for the kubelet" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.186613 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.186757 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46v9s\" (UniqueName: \"kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.187017 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.235310 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.236629 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.241849 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.249828 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.257527 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.288008 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.288080 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.288121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-46v9s\" (UniqueName: \"kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.288683 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.288902 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.317740 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-46v9s\" (UniqueName: \"kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s\") pod \"redhat-marketplace-58gks\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.364282 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.383638 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.392851 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.395682 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume\") pod \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.395722 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc6tp\" (UniqueName: \"kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp\") pod \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.395744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume\") pod \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\" (UID: \"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659\") " Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.395967 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76297\" (UniqueName: \"kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.396013 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.396039 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.396798 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" (UID: "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.401237 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" (UID: "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.403635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp" (OuterVolumeSpecName: "kube-api-access-sc6tp") pod "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" (UID: "6b8c1a78-0576-4aa3-b7c9-aaf8f420c659"). InnerVolumeSpecName "kube-api-access-sc6tp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.439217 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.441575 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" containerName="collect-profiles" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.441599 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" containerName="collect-profiles" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.441688 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b8c1a78-0576-4aa3-b7c9-aaf8f420c659" containerName="collect-profiles" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.450569 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.458978 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.466266 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-x44k9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.466621 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-x44k9" podUID="2d163a97-86f5-4aa7-8013-c8f6a860724c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.497305 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76297\" (UniqueName: \"kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.497395 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.497453 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc 
kubenswrapper[5120]: I1208 19:31:13.497562 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.497596 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sc6tp\" (UniqueName: \"kubernetes.io/projected/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-kube-api-access-sc6tp\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.497608 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8c1a78-0576-4aa3-b7c9-aaf8f420c659-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.498051 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.498101 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.519016 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76297\" (UniqueName: \"kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297\") pod \"redhat-operators-r42pt\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.562041 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.598623 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.598809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgk7p\" (UniqueName: \"kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.599007 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.644899 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.669049 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 19:31:13 crc kubenswrapper[5120]: W1208 19:31:13.673183 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdc58252_6ea1_4687_b5eb_ed95b19e0aab.slice/crio-f46b308abfacec1aa7904289588631f20a6731d1aee444966b7bdec2d7c6de26 WatchSource:0}: Error finding container f46b308abfacec1aa7904289588631f20a6731d1aee444966b7bdec2d7c6de26: Status 404 returned error can't find the container with id f46b308abfacec1aa7904289588631f20a6731d1aee444966b7bdec2d7c6de26 Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.678155 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wt7pp" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.704243 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.704365 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.704437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgk7p\" (UniqueName: \"kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p\") pod \"redhat-operators-7n9rm\" 
(UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.704632 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.704871 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.751468 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgk7p\" (UniqueName: \"kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p\") pod \"redhat-operators-7n9rm\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.778016 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.800913 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"03c0e578-6d1b-4592-b7ac-8bfad96bc11e","Type":"ContainerStarted","Data":"b04fec3f94d5534604b2fdc863b03e25d950fea71cc726445bfd7f66b6413b00"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.811107 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" event={"ID":"bdfdfbe7-994c-4e98-ac93-9627b4264429","Type":"ContainerStarted","Data":"a5a9326950f74bbe4a6736adfd6c518b914289e07fa803947d3960706bf7538c"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.811269 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" event={"ID":"bdfdfbe7-994c-4e98-ac93-9627b4264429","Type":"ContainerStarted","Data":"d97770233e31ff8c078fe542727060d1e9d94f4fcdd3c4f485d270449010185a"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.813130 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerStarted","Data":"f46b308abfacec1aa7904289588631f20a6731d1aee444966b7bdec2d7c6de26"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.815261 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" event={"ID":"6b8c1a78-0576-4aa3-b7c9-aaf8f420c659","Type":"ContainerDied","Data":"c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.815303 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f8e8502f0ae6b82e62f22f74308ec5831a8aebf1470219bb48b134e64fb528" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.815428 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-cblk8" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.822771 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.822740705 podStartE2EDuration="2.822740705s" podCreationTimestamp="2025-12-08 19:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.816195608 +0000 UTC m=+126.488302267" watchObservedRunningTime="2025-12-08 19:31:13.822740705 +0000 UTC m=+126.494847354" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.827309 5120 generic.go:358] "Generic (PLEG): container finished" podID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerID="31b9fe3a1e8038a00557fb573b5339f460dfb44950be4c4182d6e10de22d2aa6" exitCode=0 Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.827699 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerDied","Data":"31b9fe3a1e8038a00557fb573b5339f460dfb44950be4c4182d6e10de22d2aa6"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.827749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerStarted","Data":"acf2d2d834656d2770734c49e64e95c5299c95631547ad5b92399b2be8ed3c0e"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.839020 5120 generic.go:358] "Generic (PLEG): container finished" podID="6956b13d-429c-4df6-bc64-22e987847479" containerID="60cfe9d4abeb2f82561741719b5a424dae44d91bb829328b177d8059d5533abf" exitCode=0 Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.839108 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6956b13d-429c-4df6-bc64-22e987847479","Type":"ContainerDied","Data":"60cfe9d4abeb2f82561741719b5a424dae44d91bb829328b177d8059d5533abf"} Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.843783 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tjxf5" podStartSLOduration=13.843767277 podStartE2EDuration="13.843767277s" podCreationTimestamp="2025-12-08 19:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:13.842490676 +0000 UTC m=+126.514597325" watchObservedRunningTime="2025-12-08 19:31:13.843767277 +0000 UTC m=+126.515873926" Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.907596 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:31:13 crc kubenswrapper[5120]: I1208 19:31:13.927050 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:31:13 crc kubenswrapper[5120]: W1208 19:31:13.932333 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dddc5e3_08fc_4488_aec8_6920e4ff05ed.slice/crio-0894d75775895253009b21961e20048338b1e89891fc949a157fee74df4bf496 WatchSource:0}: Error finding container 0894d75775895253009b21961e20048338b1e89891fc949a157fee74df4bf496: Status 404 returned error can't find the container with id 
0894d75775895253009b21961e20048338b1e89891fc949a157fee74df4bf496 Dec 08 19:31:13 crc kubenswrapper[5120]: W1208 19:31:13.940595 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1750a48_cdf8_4fc3_b3c1_4577527c256b.slice/crio-a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698 WatchSource:0}: Error finding container a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698: Status 404 returned error can't find the container with id a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698 Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.312950 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.862624 5120 generic.go:358] "Generic (PLEG): container finished" podID="03c0e578-6d1b-4592-b7ac-8bfad96bc11e" containerID="b04fec3f94d5534604b2fdc863b03e25d950fea71cc726445bfd7f66b6413b00" exitCode=0 Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.862712 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"03c0e578-6d1b-4592-b7ac-8bfad96bc11e","Type":"ContainerDied","Data":"b04fec3f94d5534604b2fdc863b03e25d950fea71cc726445bfd7f66b6413b00"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.867454 5120 generic.go:358] "Generic (PLEG): container finished" podID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerID="7c39f75d4bad8b6bf323ba92245fff51627ae6dc01f4ab35306693c95d1cc8c7" exitCode=0 Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.867561 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerDied","Data":"7c39f75d4bad8b6bf323ba92245fff51627ae6dc01f4ab35306693c95d1cc8c7"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.871752 5120 generic.go:358] "Generic (PLEG): container finished" podID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerID="234c609a54b29b9300fb52784b1cae19482bc30bdb2447f2e719e2d69509b719" exitCode=0 Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.871936 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerDied","Data":"234c609a54b29b9300fb52784b1cae19482bc30bdb2447f2e719e2d69509b719"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.872009 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerStarted","Data":"0894d75775895253009b21961e20048338b1e89891fc949a157fee74df4bf496"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.877843 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" event={"ID":"b1750a48-cdf8-4fc3-b3c1-4577527c256b","Type":"ContainerStarted","Data":"272123ebf619ab54a9cef4678b4e6074c5e4b50e04f9b434c39812e3c14873e5"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.877887 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" event={"ID":"b1750a48-cdf8-4fc3-b3c1-4577527c256b","Type":"ContainerStarted","Data":"a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.877948 5120 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.880729 5120 generic.go:358] "Generic (PLEG): container finished" podID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerID="7f7d84019e850a2dfc6b52c0462b149bb6efbbd1f0cb8cbc58599cec56edfae1" exitCode=0 Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.880840 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerDied","Data":"7f7d84019e850a2dfc6b52c0462b149bb6efbbd1f0cb8cbc58599cec56edfae1"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.880861 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerStarted","Data":"792e42b1636fb4d5f85cb539474a614edecf96f4cea7aa802ead035ffc6fa496"} Dec 08 19:31:14 crc kubenswrapper[5120]: I1208 19:31:14.930667 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" podStartSLOduration=107.930646306 podStartE2EDuration="1m47.930646306s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:14.926705491 +0000 UTC m=+127.598812140" watchObservedRunningTime="2025-12-08 19:31:14.930646306 +0000 UTC m=+127.602752945" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.122876 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.139846 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access\") pod \"6956b13d-429c-4df6-bc64-22e987847479\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.139914 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir\") pod \"6956b13d-429c-4df6-bc64-22e987847479\" (UID: \"6956b13d-429c-4df6-bc64-22e987847479\") " Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.140240 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6956b13d-429c-4df6-bc64-22e987847479" (UID: "6956b13d-429c-4df6-bc64-22e987847479"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.156466 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6956b13d-429c-4df6-bc64-22e987847479" (UID: "6956b13d-429c-4df6-bc64-22e987847479"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.241320 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6956b13d-429c-4df6-bc64-22e987847479-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.241356 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6956b13d-429c-4df6-bc64-22e987847479-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.892308 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.892306 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"6956b13d-429c-4df6-bc64-22e987847479","Type":"ContainerDied","Data":"50314e47ab3780c62876ca8fb182e62d1682b745f9d42bf939c165a7328fdba6"} Dec 08 19:31:15 crc kubenswrapper[5120]: I1208 19:31:15.892505 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50314e47ab3780c62876ca8fb182e62d1682b745f9d42bf939c165a7328fdba6" Dec 08 19:31:17 crc kubenswrapper[5120]: E1208 19:31:17.380411 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:17 crc kubenswrapper[5120]: E1208 19:31:17.383044 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:17 crc kubenswrapper[5120]: E1208 19:31:17.384401 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:17 crc kubenswrapper[5120]: E1208 19:31:17.384439 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:19 crc kubenswrapper[5120]: I1208 19:31:19.677918 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sjvd8" Dec 08 19:31:20 crc kubenswrapper[5120]: I1208 19:31:20.680931 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-x44k9" Dec 08 19:31:23 crc kubenswrapper[5120]: I1208 19:31:23.094847 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:23 crc kubenswrapper[5120]: I1208 19:31:23.103994 5120 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-qrz2m" Dec 08 19:31:23 crc kubenswrapper[5120]: I1208 19:31:23.459472 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39604: no serving certificate available for the kubelet" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.617902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.618093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.618233 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.618288 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.620909 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.621073 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.622770 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.631344 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.639009 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.644547 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.644961 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.679743 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.719498 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.721980 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.734371 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35fbb2df-5282-4e19-b92d-5b7ffd03f707-metrics-certs\") pod \"network-metrics-daemon-hvzp8\" (UID: \"35fbb2df-5282-4e19-b92d-5b7ffd03f707\") " pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.765443 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.773569 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.793026 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.801672 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hvzp8" Dec 08 19:31:26 crc kubenswrapper[5120]: I1208 19:31:26.893532 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:27 crc kubenswrapper[5120]: I1208 19:31:27.224655 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:31:27 crc kubenswrapper[5120]: E1208 19:31:27.363420 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:27 crc kubenswrapper[5120]: E1208 19:31:27.366946 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:27 crc kubenswrapper[5120]: E1208 19:31:27.368746 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:27 crc kubenswrapper[5120]: E1208 19:31:27.368828 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.285169 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.309090 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access\") pod \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.309250 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir\") pod \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\" (UID: \"03c0e578-6d1b-4592-b7ac-8bfad96bc11e\") " Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.309622 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03c0e578-6d1b-4592-b7ac-8bfad96bc11e" (UID: "03c0e578-6d1b-4592-b7ac-8bfad96bc11e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.310356 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.317647 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03c0e578-6d1b-4592-b7ac-8bfad96bc11e" (UID: "03c0e578-6d1b-4592-b7ac-8bfad96bc11e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.411497 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c0e578-6d1b-4592-b7ac-8bfad96bc11e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:32 crc kubenswrapper[5120]: I1208 19:31:32.866912 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hvzp8"] Dec 08 19:31:32 crc kubenswrapper[5120]: W1208 19:31:32.943515 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-da55da5928baed1294862f47d46838a665933235a683b9b1eb16bb22979b13ae WatchSource:0}: Error finding container da55da5928baed1294862f47d46838a665933235a683b9b1eb16bb22979b13ae: Status 404 returned error can't find the container with id da55da5928baed1294862f47d46838a665933235a683b9b1eb16bb22979b13ae Dec 08 19:31:32 crc kubenswrapper[5120]: W1208 19:31:32.943928 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35fbb2df_5282_4e19_b92d_5b7ffd03f707.slice/crio-5ca0265421314a5c9c890c08a98fcb97127980c7801a542f143422f94f8be049 WatchSource:0}: Error finding container 5ca0265421314a5c9c890c08a98fcb97127980c7801a542f143422f94f8be049: Status 404 returned error can't find the container with id 5ca0265421314a5c9c890c08a98fcb97127980c7801a542f143422f94f8be049 Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.004027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"03c0e578-6d1b-4592-b7ac-8bfad96bc11e","Type":"ContainerDied","Data":"0c4cbca570fcf018de3858e5515002a883407b9d0cb2fb0833544ac37df2b422"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.004087 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4cbca570fcf018de3858e5515002a883407b9d0cb2fb0833544ac37df2b422" Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.004443 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.010470 5120 generic.go:358] "Generic (PLEG): container finished" podID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerID="2d83d6f775d20e0f358835bbb9722ec89d9f913b36b2c762be107c91ae8c0407" exitCode=0 Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.010560 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerDied","Data":"2d83d6f775d20e0f358835bbb9722ec89d9f913b36b2c762be107c91ae8c0407"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.023356 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"069ce558752427adefde671d4eff494e6b47525958fe29c749d5fd954e5f862c"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.028420 5120 generic.go:358] "Generic (PLEG): container finished" podID="8646edae-915b-459b-b385-491aaf3939ec" containerID="f9b63c6e8b1da64494d8bcc3ba13c52db4ab87a615234c54b583042ac638c8b4" exitCode=0 Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.028471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerDied","Data":"f9b63c6e8b1da64494d8bcc3ba13c52db4ab87a615234c54b583042ac638c8b4"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.031937 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" event={"ID":"35fbb2df-5282-4e19-b92d-5b7ffd03f707","Type":"ContainerStarted","Data":"5ca0265421314a5c9c890c08a98fcb97127980c7801a542f143422f94f8be049"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.038684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerStarted","Data":"7c146e0e0f3153c74e2989ad42ecedcb541ad8efbf71b647fd520c4e524bb5d5"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.057705 5120 generic.go:358] "Generic (PLEG): container finished" podID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerID="9f01c812bf70c97a712e042e2a74d6da3615aeb88aaf5996ad63aadb99398fad" exitCode=0 Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.057966 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerDied","Data":"9f01c812bf70c97a712e042e2a74d6da3615aeb88aaf5996ad63aadb99398fad"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.090372 5120 generic.go:358] "Generic (PLEG): container finished" podID="8e09ec29-0759-4098-9957-d1a998ed478c" containerID="87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd" exitCode=0 Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.090498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerDied","Data":"87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.097874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" 
event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerStarted","Data":"acf3b035c4e09043732b1e8db6f3c0cd7e173ef035e4d77c6e2a9a005d2c389d"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.124674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerStarted","Data":"a651081ee5190bdef92a35842edbf1b53a06a7b9062d5406548b22ebf2da53e0"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.129514 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"da55da5928baed1294862f47d46838a665933235a683b9b1eb16bb22979b13ae"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.135704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerStarted","Data":"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450"} Dec 08 19:31:33 crc kubenswrapper[5120]: I1208 19:31:33.139829 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"65065d1582fdba2fc701e8adb5ffe53fe7ed57453f432a94ff0b332f9dfaa9e1"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.150224 5120 generic.go:358] "Generic (PLEG): container finished" podID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerID="7c146e0e0f3153c74e2989ad42ecedcb541ad8efbf71b647fd520c4e524bb5d5" exitCode=0 Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.150313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerDied","Data":"7c146e0e0f3153c74e2989ad42ecedcb541ad8efbf71b647fd520c4e524bb5d5"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.154216 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerStarted","Data":"cdbcf959715feacd9fbbc73bce688eb4c212422fbb3f88cd62a670934aad65e3"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.155939 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerStarted","Data":"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.157841 5120 generic.go:358] "Generic (PLEG): container finished" podID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerID="acf3b035c4e09043732b1e8db6f3c0cd7e173ef035e4d77c6e2a9a005d2c389d" exitCode=0 Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.157897 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerDied","Data":"acf3b035c4e09043732b1e8db6f3c0cd7e173ef035e4d77c6e2a9a005d2c389d"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.157913 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" 
event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerStarted","Data":"7c2adb36c19be4c7094db31684c4fd2f20aa98a4a06e95467dbe133ec0868723"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.159786 5120 generic.go:358] "Generic (PLEG): container finished" podID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerID="a651081ee5190bdef92a35842edbf1b53a06a7b9062d5406548b22ebf2da53e0" exitCode=0 Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.159868 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerDied","Data":"a651081ee5190bdef92a35842edbf1b53a06a7b9062d5406548b22ebf2da53e0"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.168323 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"5cdc9ac973d9f6780598eb7023125d6888b264658c3a24f0f44c9200f1e9048c"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.181749 5120 generic.go:358] "Generic (PLEG): container finished" podID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerID="1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450" exitCode=0 Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.181812 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerDied","Data":"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.181837 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerStarted","Data":"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.184958 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"810b4197c3ae11c83a42c976277fc4a270567950129976b9c6773118dae9fa8c"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.188097 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.193641 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8v54" podStartSLOduration=4.754593703 podStartE2EDuration="24.193626789s" podCreationTimestamp="2025-12-08 19:31:10 +0000 UTC" firstStartedPulling="2025-12-08 19:31:13.023894317 +0000 UTC m=+125.696000976" lastFinishedPulling="2025-12-08 19:31:32.462927413 +0000 UTC m=+145.135034062" observedRunningTime="2025-12-08 19:31:34.191238604 +0000 UTC m=+146.863345273" watchObservedRunningTime="2025-12-08 19:31:34.193626789 +0000 UTC m=+146.865733438" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.193674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerStarted","Data":"05189ed3e33a5fcb3c9bbe36566445efe0794d42442ff7c5d0f9f0e8782a3616"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.219236 5120 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4104d0d61b4f23523bd450cc681c9b947d303e51e07096fd9a4f42b41d9842d5"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.231158 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerStarted","Data":"2d60d9c588c4f80451d6bc26fb93726419d0ae8b715d83cbd78af8f264a644a4"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.243767 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" event={"ID":"35fbb2df-5282-4e19-b92d-5b7ffd03f707","Type":"ContainerStarted","Data":"e4c4e0006958c87c0a0bb6b6b9e3fd6bbabf38dad41657d54d22fd83a862f8ed"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.243797 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hvzp8" event={"ID":"35fbb2df-5282-4e19-b92d-5b7ffd03f707","Type":"ContainerStarted","Data":"2b3f73459bc52694e705feb90f990ca67d277dfad35c13a56648e9b8c66c2096"} Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.257460 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wv6j" podStartSLOduration=3.59345488 podStartE2EDuration="24.257441915s" podCreationTimestamp="2025-12-08 19:31:10 +0000 UTC" firstStartedPulling="2025-12-08 19:31:11.727295876 +0000 UTC m=+124.399402525" lastFinishedPulling="2025-12-08 19:31:32.391282891 +0000 UTC m=+145.063389560" observedRunningTime="2025-12-08 19:31:34.257414234 +0000 UTC m=+146.929520883" watchObservedRunningTime="2025-12-08 19:31:34.257441915 +0000 UTC m=+146.929548564" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.288470 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bfcr5" podStartSLOduration=3.747654426 podStartE2EDuration="22.28845179s" podCreationTimestamp="2025-12-08 19:31:12 +0000 UTC" firstStartedPulling="2025-12-08 19:31:13.828389172 +0000 UTC m=+126.500495821" lastFinishedPulling="2025-12-08 19:31:32.369186526 +0000 UTC m=+145.041293185" observedRunningTime="2025-12-08 19:31:34.288197332 +0000 UTC m=+146.960304001" watchObservedRunningTime="2025-12-08 19:31:34.28845179 +0000 UTC m=+146.960558439" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.306893 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lfzbg" podStartSLOduration=3.517476918 podStartE2EDuration="24.30687735s" podCreationTimestamp="2025-12-08 19:31:10 +0000 UTC" firstStartedPulling="2025-12-08 19:31:11.685956792 +0000 UTC m=+124.358063441" lastFinishedPulling="2025-12-08 19:31:32.475357224 +0000 UTC m=+145.147463873" observedRunningTime="2025-12-08 19:31:34.301546262 +0000 UTC m=+146.973652921" watchObservedRunningTime="2025-12-08 19:31:34.30687735 +0000 UTC m=+146.978983999" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.386822 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-58gks" podStartSLOduration=4.850806512 podStartE2EDuration="22.386808812s" podCreationTimestamp="2025-12-08 19:31:12 +0000 UTC" firstStartedPulling="2025-12-08 19:31:14.868263489 +0000 UTC m=+127.540370138" lastFinishedPulling="2025-12-08 
19:31:32.404265769 +0000 UTC m=+145.076372438" observedRunningTime="2025-12-08 19:31:34.381402072 +0000 UTC m=+147.053508731" watchObservedRunningTime="2025-12-08 19:31:34.386808812 +0000 UTC m=+147.058915451" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.429009 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5czqn" podStartSLOduration=3.752473287 podStartE2EDuration="24.428975517s" podCreationTimestamp="2025-12-08 19:31:10 +0000 UTC" firstStartedPulling="2025-12-08 19:31:11.705343834 +0000 UTC m=+124.377450483" lastFinishedPulling="2025-12-08 19:31:32.381846064 +0000 UTC m=+145.053952713" observedRunningTime="2025-12-08 19:31:34.425941952 +0000 UTC m=+147.098048601" watchObservedRunningTime="2025-12-08 19:31:34.428975517 +0000 UTC m=+147.101082166" Dec 08 19:31:34 crc kubenswrapper[5120]: I1208 19:31:34.429159 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hvzp8" podStartSLOduration=127.429155673 podStartE2EDuration="2m7.429155673s" podCreationTimestamp="2025-12-08 19:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:34.407627167 +0000 UTC m=+147.079733826" watchObservedRunningTime="2025-12-08 19:31:34.429155673 +0000 UTC m=+147.101262322" Dec 08 19:31:35 crc kubenswrapper[5120]: I1208 19:31:35.253324 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerStarted","Data":"ee2db2eea96616dce34005712bfc119785917ca12f3514b2fd5a070baacb5725"} Dec 08 19:31:35 crc kubenswrapper[5120]: I1208 19:31:35.259150 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerStarted","Data":"daa75d2c262dbd2c8847d5b9282f050fb3c2abf9af688f8668fad1ca6e95319c"} Dec 08 19:31:35 crc kubenswrapper[5120]: I1208 19:31:35.273532 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7n9rm" podStartSLOduration=4.728872726 podStartE2EDuration="22.273514836s" podCreationTimestamp="2025-12-08 19:31:13 +0000 UTC" firstStartedPulling="2025-12-08 19:31:14.881895369 +0000 UTC m=+127.554002018" lastFinishedPulling="2025-12-08 19:31:32.426537469 +0000 UTC m=+145.098644128" observedRunningTime="2025-12-08 19:31:35.272454163 +0000 UTC m=+147.944560812" watchObservedRunningTime="2025-12-08 19:31:35.273514836 +0000 UTC m=+147.945621485" Dec 08 19:31:35 crc kubenswrapper[5120]: I1208 19:31:35.290435 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r42pt" podStartSLOduration=4.73635288 podStartE2EDuration="22.290418067s" podCreationTimestamp="2025-12-08 19:31:13 +0000 UTC" firstStartedPulling="2025-12-08 19:31:14.87240814 +0000 UTC m=+127.544514789" lastFinishedPulling="2025-12-08 19:31:32.426473327 +0000 UTC m=+145.098579976" observedRunningTime="2025-12-08 19:31:35.287711692 +0000 UTC m=+147.959818341" watchObservedRunningTime="2025-12-08 19:31:35.290418067 +0000 UTC m=+147.962524716" Dec 08 19:31:35 crc kubenswrapper[5120]: I1208 19:31:35.898795 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:31:37 crc 
kubenswrapper[5120]: E1208 19:31:37.364873 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:37 crc kubenswrapper[5120]: E1208 19:31:37.366859 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:37 crc kubenswrapper[5120]: E1208 19:31:37.369085 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:37 crc kubenswrapper[5120]: E1208 19:31:37.369131 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.560218 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.560616 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.674796 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.674844 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.684463 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-dkxdr" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.753013 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.762732 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.786326 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.786379 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.820494 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:40 crc 
kubenswrapper[5120]: I1208 19:31:40.958277 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.958324 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:40 crc kubenswrapper[5120]: I1208 19:31:40.994711 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.290530 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bfzmj_1d77b19d-39e0-468f-b4b4-63ec407092de/kube-multus-additional-cni-plugins/0.log" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.290563 5120 generic.go:358] "Generic (PLEG): container finished" podID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" exitCode=137 Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.291419 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" event={"ID":"1d77b19d-39e0-468f-b4b4-63ec407092de","Type":"ContainerDied","Data":"7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0"} Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.330210 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.332995 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.340425 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.343657 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:41 crc kubenswrapper[5120]: I1208 19:31:41.591781 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:31:42 crc kubenswrapper[5120]: I1208 19:31:42.571203 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:42 crc kubenswrapper[5120]: I1208 19:31:42.571253 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:42 crc kubenswrapper[5120]: I1208 19:31:42.613651 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.041578 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.112345 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bfzmj_1d77b19d-39e0-468f-b4b4-63ec407092de/kube-multus-additional-cni-plugins/0.log" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.112418 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.174656 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdj2\" (UniqueName: \"kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2\") pod \"1d77b19d-39e0-468f-b4b4-63ec407092de\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.174920 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir\") pod \"1d77b19d-39e0-468f-b4b4-63ec407092de\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.175058 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "1d77b19d-39e0-468f-b4b4-63ec407092de" (UID: "1d77b19d-39e0-468f-b4b4-63ec407092de"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.175097 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist\") pod \"1d77b19d-39e0-468f-b4b4-63ec407092de\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.175226 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready\") pod \"1d77b19d-39e0-468f-b4b4-63ec407092de\" (UID: \"1d77b19d-39e0-468f-b4b4-63ec407092de\") " Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.175520 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready" (OuterVolumeSpecName: "ready") pod "1d77b19d-39e0-468f-b4b4-63ec407092de" (UID: "1d77b19d-39e0-468f-b4b4-63ec407092de"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.175558 5120 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1d77b19d-39e0-468f-b4b4-63ec407092de-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.186488 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2" (OuterVolumeSpecName: "kube-api-access-5vdj2") pod "1d77b19d-39e0-468f-b4b4-63ec407092de" (UID: "1d77b19d-39e0-468f-b4b4-63ec407092de"). InnerVolumeSpecName "kube-api-access-5vdj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.192665 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "1d77b19d-39e0-468f-b4b4-63ec407092de" (UID: "1d77b19d-39e0-468f-b4b4-63ec407092de"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.279795 5120 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1d77b19d-39e0-468f-b4b4-63ec407092de-ready\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.280021 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vdj2\" (UniqueName: \"kubernetes.io/projected/1d77b19d-39e0-468f-b4b4-63ec407092de-kube-api-access-5vdj2\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.280091 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1d77b19d-39e0-468f-b4b4-63ec407092de-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.305122 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-bfzmj_1d77b19d-39e0-468f-b4b4-63ec407092de/kube-multus-additional-cni-plugins/0.log" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.305468 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.305459 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-bfzmj" event={"ID":"1d77b19d-39e0-468f-b4b4-63ec407092de","Type":"ContainerDied","Data":"5e145a60597d6f3014c340e9cece9535aa1bd1044922ad215d8fa7a647056b84"} Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.305636 5120 scope.go:117] "RemoveContainer" containerID="7ffa92f6cc346170f9ad3e1db675020c695e901dd13d1f45399cc7cdc32f7dc0" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.306206 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lfzbg" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="registry-server" containerID="cri-o://0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a" gracePeriod=2 Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.337766 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bfzmj"] Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.341677 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-bfzmj"] Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.365634 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.366583 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.369645 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.408067 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.563510 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.563814 
5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.600500 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.637251 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.637573 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8v54" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="registry-server" containerID="cri-o://47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6" gracePeriod=2 Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.665913 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" path="/var/lib/kubelet/pods/1d77b19d-39e0-468f-b4b4-63ec407092de/volumes" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.778904 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.779151 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.816304 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.946464 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:43 crc kubenswrapper[5120]: I1208 19:31:43.972418 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34686: no serving certificate available for the kubelet" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.090073 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities\") pod \"8e09ec29-0759-4098-9957-d1a998ed478c\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.090408 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjgxj\" (UniqueName: \"kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj\") pod \"8e09ec29-0759-4098-9957-d1a998ed478c\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.090516 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content\") pod \"8e09ec29-0759-4098-9957-d1a998ed478c\" (UID: \"8e09ec29-0759-4098-9957-d1a998ed478c\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.092361 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities" (OuterVolumeSpecName: "utilities") pod "8e09ec29-0759-4098-9957-d1a998ed478c" (UID: "8e09ec29-0759-4098-9957-d1a998ed478c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.099396 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj" (OuterVolumeSpecName: "kube-api-access-kjgxj") pod "8e09ec29-0759-4098-9957-d1a998ed478c" (UID: "8e09ec29-0759-4098-9957-d1a998ed478c"). InnerVolumeSpecName "kube-api-access-kjgxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.126407 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e09ec29-0759-4098-9957-d1a998ed478c" (UID: "8e09ec29-0759-4098-9957-d1a998ed478c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.164796 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.192361 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.192392 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e09ec29-0759-4098-9957-d1a998ed478c-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.192402 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kjgxj\" (UniqueName: \"kubernetes.io/projected/8e09ec29-0759-4098-9957-d1a998ed478c-kube-api-access-kjgxj\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.293742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content\") pod \"57b9e245-7d13-4a38-8b88-1e23425ab322\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.293798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q6rh\" (UniqueName: \"kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh\") pod \"57b9e245-7d13-4a38-8b88-1e23425ab322\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.293823 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities\") pod \"57b9e245-7d13-4a38-8b88-1e23425ab322\" (UID: \"57b9e245-7d13-4a38-8b88-1e23425ab322\") " Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.294944 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities" (OuterVolumeSpecName: "utilities") pod "57b9e245-7d13-4a38-8b88-1e23425ab322" (UID: "57b9e245-7d13-4a38-8b88-1e23425ab322"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.297210 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh" (OuterVolumeSpecName: "kube-api-access-4q6rh") pod "57b9e245-7d13-4a38-8b88-1e23425ab322" (UID: "57b9e245-7d13-4a38-8b88-1e23425ab322"). InnerVolumeSpecName "kube-api-access-4q6rh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.319048 5120 generic.go:358] "Generic (PLEG): container finished" podID="8e09ec29-0759-4098-9957-d1a998ed478c" containerID="47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6" exitCode=0 Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.319209 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerDied","Data":"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6"} Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.319234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8v54" event={"ID":"8e09ec29-0759-4098-9957-d1a998ed478c","Type":"ContainerDied","Data":"dfc77bbec8f10e7a47b2a6334f80a6e17b1a771f297ec7fde3708d7dc292e99e"} Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.319251 5120 scope.go:117] "RemoveContainer" containerID="47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.319364 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8v54" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.323802 5120 generic.go:358] "Generic (PLEG): container finished" podID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerID="0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a" exitCode=0 Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.323889 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lfzbg" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.323949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerDied","Data":"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a"} Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.323992 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfzbg" event={"ID":"57b9e245-7d13-4a38-8b88-1e23425ab322","Type":"ContainerDied","Data":"967cb21cae878941230f907820190b9e2ca3ade54d9ac75b395075c324e712f0"} Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.340986 5120 scope.go:117] "RemoveContainer" containerID="87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.363592 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.367638 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8v54"] Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.369476 5120 scope.go:117] "RemoveContainer" containerID="e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.369654 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.371691 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.373089 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.383990 5120 scope.go:117] "RemoveContainer" containerID="47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.385212 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6\": container with ID starting with 47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6 not found: ID does not exist" containerID="47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.385253 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6"} err="failed to get container status \"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6\": rpc error: code = NotFound desc = could not find container \"47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6\": container with ID starting with 47b9f0c66c9906ec9cc6f86155fc5d5861ef5cbb5ea95cdae7d437e5a7aee3f6 not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.385301 5120 scope.go:117] "RemoveContainer" containerID="87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.388993 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd\": container with ID starting with 87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd not found: ID does not exist" containerID="87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.389043 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd"} err="failed to get container status \"87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd\": rpc error: code = NotFound desc = could not find container \"87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd\": container with ID starting with 87375da2d520e7003a77d04ff4b515181b576e9d1f480e45417db68e136d93fd not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.389070 5120 scope.go:117] "RemoveContainer" containerID="e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.391627 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118\": container with ID starting with e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118 not found: ID does not exist" containerID="e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.391690 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118"} err="failed to get container status \"e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118\": rpc error: code = NotFound desc = could not find container \"e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118\": container with ID starting with e4dbc93c18e118b36d8d575828233c8727353e9c32a1d13fe82e7f29d2abd118 not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.391724 5120 scope.go:117] "RemoveContainer" containerID="0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.399452 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4q6rh\" (UniqueName: \"kubernetes.io/projected/57b9e245-7d13-4a38-8b88-1e23425ab322-kube-api-access-4q6rh\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.399479 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.413124 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57b9e245-7d13-4a38-8b88-1e23425ab322" (UID: "57b9e245-7d13-4a38-8b88-1e23425ab322"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.418010 5120 scope.go:117] "RemoveContainer" containerID="1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.442315 5120 scope.go:117] "RemoveContainer" containerID="3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.454870 5120 scope.go:117] "RemoveContainer" containerID="0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.455265 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a\": container with ID starting with 0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a not found: ID does not exist" containerID="0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.455297 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a"} err="failed to get container status \"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a\": rpc error: code = NotFound desc = could not find container \"0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a\": container with ID starting with 0449c26a0cdb807c346f40597bffd2b39667d10b9ee627fb61d81d1f352c673a not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.455317 5120 scope.go:117] "RemoveContainer" containerID="1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.455842 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450\": container with ID starting with 1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450 not found: ID does not exist" containerID="1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.455861 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450"} err="failed to get container status \"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450\": rpc error: code = NotFound desc = could not find container \"1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450\": container with ID starting with 1ca123d652c120f45d60bcaee6f841eaed6fd80741b28a57a5a54df035b03450 not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.455872 5120 scope.go:117] "RemoveContainer" containerID="3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb" Dec 08 19:31:44 crc kubenswrapper[5120]: E1208 19:31:44.456134 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb\": container with ID starting with 3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb not found: ID does not exist" containerID="3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb" Dec 08 19:31:44 crc 
kubenswrapper[5120]: I1208 19:31:44.456158 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb"} err="failed to get container status \"3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb\": rpc error: code = NotFound desc = could not find container \"3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb\": container with ID starting with 3b3c582f29d913b7ae3083c41ae4e73931fadaa853a23022c808a700099551bb not found: ID does not exist" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.500407 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b9e245-7d13-4a38-8b88-1e23425ab322-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.653978 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:44 crc kubenswrapper[5120]: I1208 19:31:44.660855 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lfzbg"] Dec 08 19:31:45 crc kubenswrapper[5120]: I1208 19:31:45.441050 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:45 crc kubenswrapper[5120]: I1208 19:31:45.666700 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" path="/var/lib/kubelet/pods/57b9e245-7d13-4a38-8b88-1e23425ab322/volumes" Dec 08 19:31:45 crc kubenswrapper[5120]: I1208 19:31:45.667741 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" path="/var/lib/kubelet/pods/8e09ec29-0759-4098-9957-d1a998ed478c/volumes" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.062292 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063177 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="extract-utilities" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063195 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="extract-utilities" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063203 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063209 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063218 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="extract-content" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063223 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="extract-content" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063241 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="extract-content" Dec 08 19:31:47 crc 
kubenswrapper[5120]: I1208 19:31:47.063246 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="extract-content" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063257 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="extract-utilities" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063264 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="extract-utilities" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063274 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6956b13d-429c-4df6-bc64-22e987847479" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063287 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6956b13d-429c-4df6-bc64-22e987847479" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063315 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063322 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063332 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03c0e578-6d1b-4592-b7ac-8bfad96bc11e" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063337 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c0e578-6d1b-4592-b7ac-8bfad96bc11e" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063346 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063351 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063450 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e09ec29-0759-4098-9957-d1a998ed478c" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063459 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6956b13d-429c-4df6-bc64-22e987847479" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063467 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d77b19d-39e0-468f-b4b4-63ec407092de" containerName="kube-multus-additional-cni-plugins" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063474 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="03c0e578-6d1b-4592-b7ac-8bfad96bc11e" containerName="pruner" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.063486 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="57b9e245-7d13-4a38-8b88-1e23425ab322" containerName="registry-server" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.704410 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.704761 5120 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-58gks" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="registry-server" containerID="cri-o://05189ed3e33a5fcb3c9bbe36566445efe0794d42442ff7c5d0f9f0e8782a3616" gracePeriod=2 Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.704813 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.707508 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.707538 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.838651 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.839233 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7n9rm" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="registry-server" containerID="cri-o://ee2db2eea96616dce34005712bfc119785917ca12f3514b2fd5a070baacb5725" gracePeriod=2 Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.848566 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.849515 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.951257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.951332 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.951783 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:47 crc kubenswrapper[5120]: I1208 19:31:47.970431 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:48 crc kubenswrapper[5120]: I1208 19:31:48.030969 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:48 crc kubenswrapper[5120]: I1208 19:31:48.279018 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 19:31:48 crc kubenswrapper[5120]: W1208 19:31:48.289480 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89c103f2_c07e_4943_a534_ed818bcd591f.slice/crio-12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c WatchSource:0}: Error finding container 12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c: Status 404 returned error can't find the container with id 12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c Dec 08 19:31:48 crc kubenswrapper[5120]: I1208 19:31:48.367972 5120 generic.go:358] "Generic (PLEG): container finished" podID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerID="05189ed3e33a5fcb3c9bbe36566445efe0794d42442ff7c5d0f9f0e8782a3616" exitCode=0 Dec 08 19:31:48 crc kubenswrapper[5120]: I1208 19:31:48.368040 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerDied","Data":"05189ed3e33a5fcb3c9bbe36566445efe0794d42442ff7c5d0f9f0e8782a3616"} Dec 08 19:31:48 crc kubenswrapper[5120]: I1208 19:31:48.369195 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"89c103f2-c07e-4943-a534-ed818bcd591f","Type":"ContainerStarted","Data":"12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c"} Dec 08 19:31:49 crc kubenswrapper[5120]: I1208 19:31:49.376570 5120 generic.go:358] "Generic (PLEG): container finished" podID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerID="ee2db2eea96616dce34005712bfc119785917ca12f3514b2fd5a070baacb5725" exitCode=0 Dec 08 19:31:49 crc kubenswrapper[5120]: I1208 19:31:49.376628 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerDied","Data":"ee2db2eea96616dce34005712bfc119785917ca12f3514b2fd5a070baacb5725"} Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.046802 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.182396 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46v9s\" (UniqueName: \"kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s\") pod \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.182511 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content\") pod \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.182588 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities\") pod \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\" (UID: \"bdc58252-6ea1-4687-b5eb-ed95b19e0aab\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.183875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities" (OuterVolumeSpecName: "utilities") pod "bdc58252-6ea1-4687-b5eb-ed95b19e0aab" (UID: "bdc58252-6ea1-4687-b5eb-ed95b19e0aab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.188331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s" (OuterVolumeSpecName: "kube-api-access-46v9s") pod "bdc58252-6ea1-4687-b5eb-ed95b19e0aab" (UID: "bdc58252-6ea1-4687-b5eb-ed95b19e0aab"). InnerVolumeSpecName "kube-api-access-46v9s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.192250 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc58252-6ea1-4687-b5eb-ed95b19e0aab" (UID: "bdc58252-6ea1-4687-b5eb-ed95b19e0aab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.283613 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46v9s\" (UniqueName: \"kubernetes.io/projected/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-kube-api-access-46v9s\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.283657 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.283669 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc58252-6ea1-4687-b5eb-ed95b19e0aab-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.384707 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58gks" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.384701 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58gks" event={"ID":"bdc58252-6ea1-4687-b5eb-ed95b19e0aab","Type":"ContainerDied","Data":"f46b308abfacec1aa7904289588631f20a6731d1aee444966b7bdec2d7c6de26"} Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.384840 5120 scope.go:117] "RemoveContainer" containerID="05189ed3e33a5fcb3c9bbe36566445efe0794d42442ff7c5d0f9f0e8782a3616" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.387158 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"89c103f2-c07e-4943-a534-ed818bcd591f","Type":"ContainerStarted","Data":"77645e5855a663527d0030a024d9cefe9aba44f1c962d80e7ecaa8457470b968"} Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.403387 5120 scope.go:117] "RemoveContainer" containerID="2d83d6f775d20e0f358835bbb9722ec89d9f913b36b2c762be107c91ae8c0407" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.416304 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.418416 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-58gks"] Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.442371 5120 scope.go:117] "RemoveContainer" containerID="7c39f75d4bad8b6bf323ba92245fff51627ae6dc01f4ab35306693c95d1cc8c7" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.754976 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.890287 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgk7p\" (UniqueName: \"kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p\") pod \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.890456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities\") pod \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.890498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content\") pod \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\" (UID: \"2e978cf7-a02c-43e6-b689-1ffffd0893a0\") " Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.891774 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities" (OuterVolumeSpecName: "utilities") pod "2e978cf7-a02c-43e6-b689-1ffffd0893a0" (UID: "2e978cf7-a02c-43e6-b689-1ffffd0893a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.898761 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p" (OuterVolumeSpecName: "kube-api-access-hgk7p") pod "2e978cf7-a02c-43e6-b689-1ffffd0893a0" (UID: "2e978cf7-a02c-43e6-b689-1ffffd0893a0"). InnerVolumeSpecName "kube-api-access-hgk7p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.980978 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e978cf7-a02c-43e6-b689-1ffffd0893a0" (UID: "2e978cf7-a02c-43e6-b689-1ffffd0893a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.991896 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.991942 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e978cf7-a02c-43e6-b689-1ffffd0893a0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:50 crc kubenswrapper[5120]: I1208 19:31:50.991958 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgk7p\" (UniqueName: \"kubernetes.io/projected/2e978cf7-a02c-43e6-b689-1ffffd0893a0-kube-api-access-hgk7p\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.396299 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7n9rm" event={"ID":"2e978cf7-a02c-43e6-b689-1ffffd0893a0","Type":"ContainerDied","Data":"792e42b1636fb4d5f85cb539474a614edecf96f4cea7aa802ead035ffc6fa496"} Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.396345 5120 scope.go:117] "RemoveContainer" containerID="ee2db2eea96616dce34005712bfc119785917ca12f3514b2fd5a070baacb5725" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.396436 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7n9rm" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.403090 5120 generic.go:358] "Generic (PLEG): container finished" podID="89c103f2-c07e-4943-a534-ed818bcd591f" containerID="77645e5855a663527d0030a024d9cefe9aba44f1c962d80e7ecaa8457470b968" exitCode=0 Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.403132 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"89c103f2-c07e-4943-a534-ed818bcd591f","Type":"ContainerDied","Data":"77645e5855a663527d0030a024d9cefe9aba44f1c962d80e7ecaa8457470b968"} Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.420356 5120 scope.go:117] "RemoveContainer" containerID="a651081ee5190bdef92a35842edbf1b53a06a7b9062d5406548b22ebf2da53e0" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.438451 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.442110 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7n9rm"] Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.463051 5120 scope.go:117] "RemoveContainer" containerID="7f7d84019e850a2dfc6b52c0462b149bb6efbbd1f0cb8cbc58599cec56edfae1" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.666558 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" path="/var/lib/kubelet/pods/2e978cf7-a02c-43e6-b689-1ffffd0893a0/volumes" Dec 08 19:31:51 crc kubenswrapper[5120]: I1208 19:31:51.667910 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" path="/var/lib/kubelet/pods/bdc58252-6ea1-4687-b5eb-ed95b19e0aab/volumes" Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.641089 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.812570 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir\") pod \"89c103f2-c07e-4943-a534-ed818bcd591f\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.812735 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access\") pod \"89c103f2-c07e-4943-a534-ed818bcd591f\" (UID: \"89c103f2-c07e-4943-a534-ed818bcd591f\") " Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.812733 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89c103f2-c07e-4943-a534-ed818bcd591f" (UID: "89c103f2-c07e-4943-a534-ed818bcd591f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.813286 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89c103f2-c07e-4943-a534-ed818bcd591f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.819852 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89c103f2-c07e-4943-a534-ed818bcd591f" (UID: "89c103f2-c07e-4943-a534-ed818bcd591f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:52 crc kubenswrapper[5120]: I1208 19:31:52.914330 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c103f2-c07e-4943-a534-ed818bcd591f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:53 crc kubenswrapper[5120]: I1208 19:31:53.413721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"89c103f2-c07e-4943-a534-ed818bcd591f","Type":"ContainerDied","Data":"12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c"} Dec 08 19:31:53 crc kubenswrapper[5120]: I1208 19:31:53.413973 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c1c9afbda2369b67cc3e7bb090a79af6b341b1e7d7f2f3d76fa785baba753c" Dec 08 19:31:53 crc kubenswrapper[5120]: I1208 19:31:53.413731 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.059338 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.062237 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="extract-utilities" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.062476 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="extract-utilities" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.062644 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="extract-content" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.062808 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="extract-content" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.062988 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.063149 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.063400 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="extract-content" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.063590 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" 
containerName="extract-content" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.063745 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89c103f2-c07e-4943-a534-ed818bcd591f" containerName="pruner" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.063912 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c103f2-c07e-4943-a534-ed818bcd591f" containerName="pruner" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.064080 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.064304 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.064504 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="extract-utilities" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.064638 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="extract-utilities" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.064966 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="89c103f2-c07e-4943-a534-ed818bcd591f" containerName="pruner" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.065095 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e978cf7-a02c-43e6-b689-1ffffd0893a0" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.065299 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bdc58252-6ea1-4687-b5eb-ed95b19e0aab" containerName="registry-server" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.080275 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.080526 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.083002 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.083467 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.145587 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.145923 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.146014 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.246907 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.246983 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.247018 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.247469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.247538 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 
19:31:55.270120 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access\") pod \"installer-12-crc\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.398833 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:31:55 crc kubenswrapper[5120]: I1208 19:31:55.581632 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 19:31:56 crc kubenswrapper[5120]: I1208 19:31:56.433440 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"729b771a-4119-467c-a85c-f92449b1c88e","Type":"ContainerStarted","Data":"a318ec4d5e17abaa886bdcf1b0c63ee639b4e899e84093706ca9d06cca863f94"} Dec 08 19:31:56 crc kubenswrapper[5120]: I1208 19:31:56.433484 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"729b771a-4119-467c-a85c-f92449b1c88e","Type":"ContainerStarted","Data":"da72b31bcab0c5a44edaafe91c39976f90a2a237c2214e5d18ad4a2957ebf7c3"} Dec 08 19:31:56 crc kubenswrapper[5120]: I1208 19:31:56.454117 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.454100438 podStartE2EDuration="1.454100438s" podCreationTimestamp="2025-12-08 19:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:56.451368153 +0000 UTC m=+169.123474822" watchObservedRunningTime="2025-12-08 19:31:56.454100438 +0000 UTC m=+169.126207087" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.269944 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.618939 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" podUID="74731f18-a532-487c-b679-3d850acf1edd" containerName="oauth-openshift" containerID="cri-o://d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96" gracePeriod=15 Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.954512 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.986054 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69f649d946-xzhlr"] Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.986609 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74731f18-a532-487c-b679-3d850acf1edd" containerName="oauth-openshift" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.986632 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="74731f18-a532-487c-b679-3d850acf1edd" containerName="oauth-openshift" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.986766 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="74731f18-a532-487c-b679-3d850acf1edd" containerName="oauth-openshift" Dec 08 19:32:06 crc kubenswrapper[5120]: I1208 19:32:06.990129 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.002369 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69f649d946-xzhlr"] Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004453 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004497 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004517 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004563 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004582 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc 
kubenswrapper[5120]: I1208 19:32:07.004655 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004695 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swdk\" (UniqueName: \"kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004738 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004787 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004836 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004860 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.004901 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error\") pod \"74731f18-a532-487c-b679-3d850acf1edd\" (UID: \"74731f18-a532-487c-b679-3d850acf1edd\") " Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.006417 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.006606 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.006664 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.007262 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.010955 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk" (OuterVolumeSpecName: "kube-api-access-5swdk") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "kube-api-access-5swdk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.011063 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.011523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.011827 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.012081 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.012776 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.021135 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.021774 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.025527 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.029596 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "74731f18-a532-487c-b679-3d850acf1edd" (UID: "74731f18-a532-487c-b679-3d850acf1edd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106045 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106102 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-error\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106230 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-service-ca\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106265 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-audit-policies\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106341 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106387 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106407 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106479 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f84rs\" (UniqueName: \"kubernetes.io/projected/cea8594c-12d4-4226-93de-f2795461c304-kube-api-access-f84rs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106510 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-session\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.106904 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cea8594c-12d4-4226-93de-f2795461c304-audit-dir\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107017 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107036 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-router-certs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107054 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-login\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107107 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107119 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-provider-selection\") on node 
\"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107128 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5swdk\" (UniqueName: \"kubernetes.io/projected/74731f18-a532-487c-b679-3d850acf1edd-kube-api-access-5swdk\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107980 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.107996 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108072 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108085 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108099 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108112 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108126 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108137 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108148 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108177 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74731f18-a532-487c-b679-3d850acf1edd-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.108193 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74731f18-a532-487c-b679-3d850acf1edd-v4-0-config-user-idp-0-file-data\") on node 
\"crc\" DevicePath \"\"" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.209629 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-service-ca\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.209754 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-audit-policies\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.209849 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.209927 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210462 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210555 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f84rs\" (UniqueName: \"kubernetes.io/projected/cea8594c-12d4-4226-93de-f2795461c304-kube-api-access-f84rs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-session\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 
19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cea8594c-12d4-4226-93de-f2795461c304-audit-dir\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210909 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-audit-policies\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210941 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-router-certs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.211107 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-login\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.211257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.211360 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-error\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.210722 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-service-ca\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.212000 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cea8594c-12d4-4226-93de-f2795461c304-audit-dir\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.212151 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.213099 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.214531 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-session\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.215630 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.217129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-login\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.217478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.219249 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-template-error\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.219479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.220014 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-router-certs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.220311 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cea8594c-12d4-4226-93de-f2795461c304-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.228135 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f84rs\" (UniqueName: \"kubernetes.io/projected/cea8594c-12d4-4226-93de-f2795461c304-kube-api-access-f84rs\") pod \"oauth-openshift-69f649d946-xzhlr\" (UID: \"cea8594c-12d4-4226-93de-f2795461c304\") " pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.371256 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.501013 5120 generic.go:358] "Generic (PLEG): container finished" podID="74731f18-a532-487c-b679-3d850acf1edd" containerID="d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96" exitCode=0 Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.501111 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.501128 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" event={"ID":"74731f18-a532-487c-b679-3d850acf1edd","Type":"ContainerDied","Data":"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96"} Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.501625 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-rbgvm" event={"ID":"74731f18-a532-487c-b679-3d850acf1edd","Type":"ContainerDied","Data":"8f5014b94aba870f63941b2dd3be46fca327b48c5e15a909dadfae5e288c60b2"} Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.501653 5120 scope.go:117] "RemoveContainer" containerID="d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.537970 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.540886 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-rbgvm"] Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.543661 5120 scope.go:117] "RemoveContainer" containerID="d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96" Dec 08 19:32:07 crc kubenswrapper[5120]: E1208 19:32:07.544232 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96\": container with ID starting with d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96 not found: ID does not exist" containerID="d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.544266 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96"} err="failed to get container status \"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96\": rpc error: code = NotFound desc = could not find container \"d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96\": container with ID starting with d9799016f1bdac71092e36bbe41f277acca8070b4ffec6eff2892843f102fa96 not found: ID does not exist" Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.620248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69f649d946-xzhlr"] Dec 08 19:32:07 crc kubenswrapper[5120]: I1208 19:32:07.672322 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74731f18-a532-487c-b679-3d850acf1edd" path="/var/lib/kubelet/pods/74731f18-a532-487c-b679-3d850acf1edd/volumes" Dec 08 19:32:08 crc kubenswrapper[5120]: I1208 19:32:08.510131 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" event={"ID":"cea8594c-12d4-4226-93de-f2795461c304","Type":"ContainerStarted","Data":"d8e313636d7b7596541b4213030ebd8adc9dd9310df028216af7b4c961e5ecf2"} Dec 08 19:32:08 crc kubenswrapper[5120]: I1208 19:32:08.510436 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" 
event={"ID":"cea8594c-12d4-4226-93de-f2795461c304","Type":"ContainerStarted","Data":"4586a959bb1ff9fa555a98e04ef3a6efa490d8fbcfa38f692ffdaaa63759ee50"} Dec 08 19:32:08 crc kubenswrapper[5120]: I1208 19:32:08.510990 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:08 crc kubenswrapper[5120]: I1208 19:32:08.591238 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" podStartSLOduration=27.591215318 podStartE2EDuration="27.591215318s" podCreationTimestamp="2025-12-08 19:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:32:08.590746004 +0000 UTC m=+181.262852713" watchObservedRunningTime="2025-12-08 19:32:08.591215318 +0000 UTC m=+181.263321987" Dec 08 19:32:08 crc kubenswrapper[5120]: I1208 19:32:08.765554 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69f649d946-xzhlr" Dec 08 19:32:24 crc kubenswrapper[5120]: I1208 19:32:24.963872 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35872: no serving certificate available for the kubelet" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.939109 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.940636 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.940761 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.940812 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.940964 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.941096 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.941119 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab" gracePeriod=15 Dec 
08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944249 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944281 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944307 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944318 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944335 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944346 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944361 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944374 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944397 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944408 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944427 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944439 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944459 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944470 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944499 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944511 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944542 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944554 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944706 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944735 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944747 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944761 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944776 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944795 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944812 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944966 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.944979 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.945248 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.945589 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.971537 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:32:33 crc kubenswrapper[5120]: I1208 19:32:33.979457 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.009952 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: E1208 19:32:34.011000 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038085 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038191 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038222 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038268 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038303 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038323 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038348 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038375 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038402 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.038438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.139997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140376 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140395 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140414 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140318 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140480 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140529 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140547 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140598 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140631 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140691 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140839 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.140991 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.141015 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.141017 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.316084 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: E1208 19:32:34.340668 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f5460f3ff3b19 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,LastTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:32:34 crc kubenswrapper[5120]: E1208 19:32:34.475690 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f5460f3ff3b19 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,LastTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.525771 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.525872 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.679284 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.680496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.681074 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.681094 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.681100 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.681107 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab" exitCode=2 Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.681239 5120 scope.go:117] "RemoveContainer" containerID="5aed853de61060125a3f2ef905a2001f7a5b9673416008aea06c07b56acd94ac" Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.682688 5120 generic.go:358] "Generic (PLEG): container finished" podID="729b771a-4119-467c-a85c-f92449b1c88e" containerID="a318ec4d5e17abaa886bdcf1b0c63ee639b4e899e84093706ca9d06cca863f94" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.682718 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"729b771a-4119-467c-a85c-f92449b1c88e","Type":"ContainerDied","Data":"a318ec4d5e17abaa886bdcf1b0c63ee639b4e899e84093706ca9d06cca863f94"} Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.684075 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3"} Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.684107 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"c888fea82c8e78f4020d46163e069549c28843c7fc1c239c1d7106aad7a266a1"} Dec 08 19:32:34 crc kubenswrapper[5120]: I1208 19:32:34.684362 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:34 crc kubenswrapper[5120]: E1208 19:32:34.684819 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:35 crc kubenswrapper[5120]: I1208 19:32:35.692771 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:35 crc kubenswrapper[5120]: I1208 19:32:35.973026 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066047 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access\") pod \"729b771a-4119-467c-a85c-f92449b1c88e\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066214 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir\") pod \"729b771a-4119-467c-a85c-f92449b1c88e\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066314 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock\") pod \"729b771a-4119-467c-a85c-f92449b1c88e\" (UID: \"729b771a-4119-467c-a85c-f92449b1c88e\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066396 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "729b771a-4119-467c-a85c-f92449b1c88e" (UID: "729b771a-4119-467c-a85c-f92449b1c88e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066699 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.066778 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock" (OuterVolumeSpecName: "var-lock") pod "729b771a-4119-467c-a85c-f92449b1c88e" (UID: "729b771a-4119-467c-a85c-f92449b1c88e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.097750 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "729b771a-4119-467c-a85c-f92449b1c88e" (UID: "729b771a-4119-467c-a85c-f92449b1c88e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.168131 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/729b771a-4119-467c-a85c-f92449b1c88e-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.168199 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/729b771a-4119-467c-a85c-f92449b1c88e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.338664 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.339370 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.370806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.370867 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.370913 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.371012 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.371052 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.371312 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.371394 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.371630 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.372102 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.372806 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.372853 5120 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.372866 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.372878 5120 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.375541 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.475506 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.702225 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"729b771a-4119-467c-a85c-f92449b1c88e","Type":"ContainerDied","Data":"da72b31bcab0c5a44edaafe91c39976f90a2a237c2214e5d18ad4a2957ebf7c3"} Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.702305 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da72b31bcab0c5a44edaafe91c39976f90a2a237c2214e5d18ad4a2957ebf7c3" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.702342 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.707443 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.708521 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b" exitCode=0 Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.708707 5120 scope.go:117] "RemoveContainer" containerID="73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.708887 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.729800 5120 scope.go:117] "RemoveContainer" containerID="b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.745642 5120 scope.go:117] "RemoveContainer" containerID="32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.765206 5120 scope.go:117] "RemoveContainer" containerID="99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.785403 5120 scope.go:117] "RemoveContainer" containerID="ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.806918 5120 scope.go:117] "RemoveContainer" containerID="6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.873397 5120 scope.go:117] "RemoveContainer" containerID="73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.874555 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810\": container with ID starting with 73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810 not found: ID does not exist" containerID="73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.874629 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810"} err="failed to get container status \"73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810\": rpc error: code = NotFound desc = could not find container \"73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810\": container with ID starting with 73c2e6c9331d10661e2dc4fb1db20fe77364b8fa912824dc91416aede8958810 not found: ID does not exist" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.874655 5120 scope.go:117] "RemoveContainer" containerID="b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.875061 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\": container with ID starting with b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a not found: ID does not exist" containerID="b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875094 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a"} err="failed to get container status \"b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\": rpc error: code = NotFound desc = could not find container \"b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a\": container with ID starting with b4292a14e1dd2390eb78dd91447367dc02303f979f76b6bb847fdc7c1595c11a not found: ID does not exist" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875111 5120 scope.go:117] "RemoveContainer" 
containerID="32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.875396 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\": container with ID starting with 32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900 not found: ID does not exist" containerID="32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875430 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900"} err="failed to get container status \"32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\": rpc error: code = NotFound desc = could not find container \"32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900\": container with ID starting with 32c5241c3843e0f3be923b9687093705a299651c1dc246c03699873043a11900 not found: ID does not exist" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875448 5120 scope.go:117] "RemoveContainer" containerID="99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.875703 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\": container with ID starting with 99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab not found: ID does not exist" containerID="99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875733 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab"} err="failed to get container status \"99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\": rpc error: code = NotFound desc = could not find container \"99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab\": container with ID starting with 99f7825f84f88507411daa365c556eda444b43dd46acde5742a28183b8c04fab not found: ID does not exist" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.875749 5120 scope.go:117] "RemoveContainer" containerID="ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.876050 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\": container with ID starting with ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b not found: ID does not exist" containerID="ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.876076 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b"} err="failed to get container status \"ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\": rpc error: code = NotFound desc = could not find container \"ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b\": container with ID starting with 
ef07eea3653fd6f207f156dd401b35f3837d4f837cd7f9d550c3de8a1178c39b not found: ID does not exist" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.876094 5120 scope.go:117] "RemoveContainer" containerID="6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb" Dec 08 19:32:36 crc kubenswrapper[5120]: E1208 19:32:36.876340 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\": container with ID starting with 6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb not found: ID does not exist" containerID="6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb" Dec 08 19:32:36 crc kubenswrapper[5120]: I1208 19:32:36.876366 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb"} err="failed to get container status \"6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\": rpc error: code = NotFound desc = could not find container \"6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb\": container with ID starting with 6b11a907b04d5d6f75cbc7662dca90fc0b14b25dad5eb0ba31a7224a9046abdb not found: ID does not exist" Dec 08 19:32:37 crc kubenswrapper[5120]: I1208 19:32:37.666334 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 19:32:38 crc kubenswrapper[5120]: I1208 19:32:38.983198 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:38 crc kubenswrapper[5120]: I1208 19:32:38.989359 5120 status_manager.go:895] "Failed to get status for pod" podUID="729b771a-4119-467c-a85c-f92449b1c88e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:38 crc kubenswrapper[5120]: I1208 19:32:38.989818 5120 status_manager.go:895] "Failed to get status for pod" podUID="729b771a-4119-467c-a85c-f92449b1c88e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.229027 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.229587 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.230048 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.230789 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.231201 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:42 crc kubenswrapper[5120]: I1208 19:32:42.231256 5120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.231660 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.432057 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Dec 08 19:32:42 crc kubenswrapper[5120]: E1208 19:32:42.833416 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms" Dec 08 19:32:43 crc kubenswrapper[5120]: E1208 19:32:43.635996 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s" Dec 08 19:32:44 crc kubenswrapper[5120]: E1208 19:32:44.477210 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f5460f3ff3b19 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,LastTimestamp:2025-12-08 19:32:34.339707673 +0000 UTC m=+207.011814322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:32:45 crc kubenswrapper[5120]: E1208 19:32:45.237238 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="3.2s" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.659306 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.660046 5120 status_manager.go:895] "Failed to get status for pod" podUID="729b771a-4119-467c-a85c-f92449b1c88e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.680628 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.680670 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:45 crc kubenswrapper[5120]: E1208 19:32:45.680927 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.681228 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:45 crc kubenswrapper[5120]: I1208 19:32:45.760929 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a2b24507f7bde718a15ee34294e0549787635043dfb755d67653cea3b96a4a5d"} Dec 08 19:32:46 crc kubenswrapper[5120]: I1208 19:32:46.768364 5120 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="21a3d29a1a7adad12d060655bf5aae3be575b980b309a67bb494e0f88b247406" exitCode=0 Dec 08 19:32:46 crc kubenswrapper[5120]: I1208 19:32:46.768452 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"21a3d29a1a7adad12d060655bf5aae3be575b980b309a67bb494e0f88b247406"} Dec 08 19:32:46 crc kubenswrapper[5120]: I1208 19:32:46.768880 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:46 crc kubenswrapper[5120]: I1208 19:32:46.768908 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:46 crc kubenswrapper[5120]: I1208 19:32:46.769368 5120 status_manager.go:895] "Failed to get status for pod" podUID="729b771a-4119-467c-a85c-f92449b1c88e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Dec 08 19:32:46 crc kubenswrapper[5120]: E1208 19:32:46.769531 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:47 crc kubenswrapper[5120]: I1208 19:32:47.781216 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"749a173decb7e59febde27c2c22a2f75cd809a5f3427db471df3af4892c55ea8"} Dec 08 19:32:47 crc kubenswrapper[5120]: I1208 19:32:47.781275 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3ae8d0c3e3b16b6b709fdcf75fe2b5c98168b0e21790a7712b81fbf4af661fe1"} Dec 08 19:32:47 crc kubenswrapper[5120]: I1208 19:32:47.781299 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e114840bcd886364f26a779d60465b32602be944cd6cb43e1955d5d74fa5467a"} Dec 08 19:32:47 crc kubenswrapper[5120]: I1208 19:32:47.781313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5c9ea1574f53e09a98d6a6de95e72e0c3584430959a18c71de3d457512bd4f4c"} Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.791354 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"36c908789f0920de2aff7ff0ec11cc347a2e8943f48a61eee3d116a4e25d63ef"} Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.791635 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.791558 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.791655 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.793951 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.793983 5120 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac" exitCode=1 Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.794047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac"} Dec 08 19:32:48 crc kubenswrapper[5120]: I1208 19:32:48.794488 5120 scope.go:117] "RemoveContainer" containerID="ecb6fe31d7c8e0b5d5f97b398c343b2792772e3d2bfd85e860b87b2b276d2fac" Dec 08 19:32:49 crc kubenswrapper[5120]: I1208 19:32:49.804001 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:32:49 crc kubenswrapper[5120]: I1208 19:32:49.804611 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1f6c2753f27792c0e04e2bfe6b22521023a9907ac02b54d2f1d1894e095e1bde"} Dec 08 19:32:50 crc kubenswrapper[5120]: I1208 19:32:50.682196 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:50 crc kubenswrapper[5120]: I1208 19:32:50.683100 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:50 crc kubenswrapper[5120]: I1208 19:32:50.687961 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.034825 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.035459 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.800910 5120 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.800965 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.826820 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.826854 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.832898 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:53 crc kubenswrapper[5120]: I1208 19:32:53.835317 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aa464b05-1d0d-4574-b9c8-53123b0a1e5e" Dec 08 19:32:54 crc kubenswrapper[5120]: I1208 19:32:54.830280 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:54 crc kubenswrapper[5120]: I1208 19:32:54.831035 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57c7e94e-3b5e-467b-83ad-227b41850996" Dec 08 19:32:57 crc kubenswrapper[5120]: I1208 19:32:57.673189 5120 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 19:32:57 crc kubenswrapper[5120]: I1208 19:32:57.673538 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 19:32:57 crc kubenswrapper[5120]: I1208 19:32:57.675888 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:32:57 crc kubenswrapper[5120]: I1208 19:32:57.680073 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aa464b05-1d0d-4574-b9c8-53123b0a1e5e" Dec 08 19:32:57 crc kubenswrapper[5120]: I1208 19:32:57.755567 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:04 crc kubenswrapper[5120]: I1208 19:33:04.457766 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:33:04 crc kubenswrapper[5120]: I1208 19:33:04.848531 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 19:33:04 crc kubenswrapper[5120]: I1208 19:33:04.954380 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:33:05 crc kubenswrapper[5120]: I1208 19:33:05.062520 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:33:05 crc kubenswrapper[5120]: I1208 19:33:05.078199 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:33:05 crc kubenswrapper[5120]: I1208 19:33:05.118472 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:05 crc kubenswrapper[5120]: I1208 19:33:05.920395 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:33:06 crc kubenswrapper[5120]: I1208 19:33:06.158472 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:06 crc kubenswrapper[5120]: I1208 19:33:06.304576 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:33:06 crc kubenswrapper[5120]: I1208 19:33:06.342642 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:33:06 crc kubenswrapper[5120]: I1208 19:33:06.635996 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:33:06 crc kubenswrapper[5120]: I1208 19:33:06.914774 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.286811 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.311330 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.347330 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.439412 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.605239 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.668639 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.668725 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.737682 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.853900 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5120]: I1208 19:33:07.983231 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.055319 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.061838 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.205932 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.355301 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.404046 5120 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.515665 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.556728 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.577500 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.615128 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.775866 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.776079 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.776654 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.807466 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.813921 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.877510 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.931846 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.965373 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:33:08 crc kubenswrapper[5120]: I1208 19:33:08.966681 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.057420 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.062717 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.156219 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.185505 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.410609 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.456122 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.456571 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.459041 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.465197 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.492714 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.498003 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.671443 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.740529 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.777508 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.824712 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.917306 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5120]: I1208 19:33:09.944089 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.048449 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.050966 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.076407 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.078089 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.116533 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.268359 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.270773 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.314490 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.329378 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.407045 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.425214 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.447481 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.475533 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.477961 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.490364 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.560125 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.598572 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.631371 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.657892 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.813021 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.862687 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5120]: I1208 19:33:10.924108 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.113433 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.153539 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.301928 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.417715 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.422207 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.428811 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.433349 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.433435 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.438204 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.459114 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.460877 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.460859166 podStartE2EDuration="18.460859166s" podCreationTimestamp="2025-12-08 19:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:11.457580443 +0000 UTC m=+244.129687112" watchObservedRunningTime="2025-12-08 19:33:11.460859166 +0000 UTC m=+244.132965815" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.492889 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.558972 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.573889 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.620758 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:33:11 crc 
kubenswrapper[5120]: I1208 19:33:11.698958 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.699553 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.834708 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 19:33:11 crc kubenswrapper[5120]: I1208 19:33:11.838160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.049711 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.074669 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.079060 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.184666 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.189347 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.197850 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.321724 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.350473 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.370823 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.446611 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.490517 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.588853 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.636136 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.637030 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.687334 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.710387 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.726834 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 19:33:12 crc kubenswrapper[5120]: I1208 19:33:12.958268 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.122230 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.128915 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.179586 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.258493 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.488699 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.494572 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.570711 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.574104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.595859 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.638610 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.648413 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.649545 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.681915 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.721184 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.725350 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.777514 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.898805 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.958484 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.975876 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 19:33:13 crc kubenswrapper[5120]: I1208 19:33:13.980200 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.045281 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.196219 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.231387 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.263759 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.308705 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.317066 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.373956 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.477851 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.484707 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.495575 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:33:14 crc 
kubenswrapper[5120]: I1208 19:33:14.600227 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.656947 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.678456 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.727927 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.757627 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.811256 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.814986 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.890333 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.897755 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.898157 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.914569 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.950376 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5120]: I1208 19:33:14.973218 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.003118 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.102246 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.172620 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.208662 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.250926 5120 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.396728 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.595049 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.645403 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.654854 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.665368 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.871411 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.930622 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.945140 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.991150 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.991752 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3" gracePeriod=5 Dec 08 19:33:15 crc kubenswrapper[5120]: I1208 19:33:15.998766 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:15.999982 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.013324 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.046656 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.048068 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.146517 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 
19:33:16.162389 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.177340 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.191012 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.324643 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.404837 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.450106 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.459190 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.525326 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.595662 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.664982 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.836397 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.847807 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:16 crc kubenswrapper[5120]: I1208 19:33:16.929581 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.123181 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.127981 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.173198 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.216056 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.277794 5120 reflector.go:430] "Caches populated" 
logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.392394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.408228 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.472904 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.498753 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.524565 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.536115 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.558111 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.673616 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.689286 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.691576 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.698079 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.718866 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.768549 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.865187 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.872585 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5120]: I1208 19:33:17.985277 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.027126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.081247 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.101066 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.123138 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.135289 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.185581 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.190258 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.227211 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.242626 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.246898 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.257199 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.280855 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.312453 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.336892 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.385439 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.390765 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.407367 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 
19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.421324 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.454617 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.464268 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.580277 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.626627 5120 ???:1] "http: TLS handshake error from 192.168.126.11:37376: no serving certificate available for the kubelet" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.721565 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.750762 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:33:18 crc kubenswrapper[5120]: I1208 19:33:18.953589 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.041103 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.134709 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.162497 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.178520 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.300524 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.323854 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.333401 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.502479 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.504201 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.712189 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:33:19 crc kubenswrapper[5120]: I1208 19:33:19.849905 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.117421 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.118067 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.146631 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.161424 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.362708 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.428117 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.548462 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.675227 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.800575 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.816653 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5120]: I1208 19:33:20.958956 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.360738 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.385133 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.590253 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.590399 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.592888 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.690932 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691152 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691312 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691324 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691390 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691469 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691573 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691813 5120 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691835 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691850 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.691868 5120 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.702487 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.793836 5120 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:21 crc kubenswrapper[5120]: I1208 19:33:21.980412 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.004217 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.004273 5120 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3" exitCode=137 Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.004361 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.004369 5120 scope.go:117] "RemoveContainer" containerID="4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.007715 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.023460 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.034304 5120 scope.go:117] "RemoveContainer" containerID="4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3" Dec 08 19:33:22 crc kubenswrapper[5120]: E1208 19:33:22.034961 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3\": container with ID starting with 4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3 not found: ID does not exist" containerID="4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.034993 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3"} err="failed to get container status \"4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3\": rpc error: code = NotFound desc = could not find container \"4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3\": container with ID starting with 4aa7e6a901bef225e70e371cff98e8821c9d3869339ea41e0f16d346be651aa3 not found: ID does not exist" Dec 08 19:33:22 crc kubenswrapper[5120]: I1208 19:33:22.046417 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 19:33:23 crc kubenswrapper[5120]: I1208 19:33:23.034573 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:33:23 crc kubenswrapper[5120]: I1208 19:33:23.034919 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:23 crc kubenswrapper[5120]: I1208 19:33:23.387662 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:33:23 crc kubenswrapper[5120]: I1208 19:33:23.669194 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 19:33:39 crc kubenswrapper[5120]: I1208 19:33:39.109639 5120 generic.go:358] "Generic (PLEG): container finished" podID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerID="ee5d8822946afce0e50863b70cdf99e51f5a6d874e36ddd1e885cbd28a0110ee" exitCode=0 Dec 08 19:33:39 crc kubenswrapper[5120]: I1208 19:33:39.109787 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerDied","Data":"ee5d8822946afce0e50863b70cdf99e51f5a6d874e36ddd1e885cbd28a0110ee"} Dec 08 19:33:39 crc kubenswrapper[5120]: I1208 19:33:39.110937 5120 scope.go:117] "RemoveContainer" containerID="ee5d8822946afce0e50863b70cdf99e51f5a6d874e36ddd1e885cbd28a0110ee" Dec 08 19:33:39 crc kubenswrapper[5120]: I1208 19:33:39.666723 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:33:40 crc kubenswrapper[5120]: I1208 19:33:40.117601 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerStarted","Data":"dba2ef8917137455a64d108c1811e9d3799f6e31ad946254fc55d76c8c51c821"} Dec 08 19:33:40 crc kubenswrapper[5120]: I1208 19:33:40.117996 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:33:40 crc kubenswrapper[5120]: I1208 19:33:40.119751 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:33:46 crc kubenswrapper[5120]: I1208 19:33:46.908829 5120 ???:1] "http: TLS handshake error from 192.168.126.11:60780: no serving certificate available for the kubelet" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.538397 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.538716 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerName="controller-manager" containerID="cri-o://45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2" gracePeriod=30 Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.552953 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.553302 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" podUID="8976d94f-0a56-417c-9460-885a2d7f0155" containerName="route-controller-manager" containerID="cri-o://53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c" gracePeriod=30 Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.935022 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.939604 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.964328 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965030 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerName="controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965057 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerName="controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965077 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="729b771a-4119-467c-a85c-f92449b1c88e" containerName="installer" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965084 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="729b771a-4119-467c-a85c-f92449b1c88e" containerName="installer" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965093 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8976d94f-0a56-417c-9460-885a2d7f0155" containerName="route-controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965104 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8976d94f-0a56-417c-9460-885a2d7f0155" containerName="route-controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965115 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965121 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965272 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="729b771a-4119-467c-a85c-f92449b1c88e" containerName="installer" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965288 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965301 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerName="controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.965310 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8976d94f-0a56-417c-9460-885a2d7f0155" containerName="route-controller-manager" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.971935 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.989987 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.994114 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.997824 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:33:47 crc kubenswrapper[5120]: I1208 19:33:47.997962 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066404 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp\") pod \"8976d94f-0a56-417c-9460-885a2d7f0155\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066482 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hb2w\" (UniqueName: \"kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066506 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca\") pod \"8976d94f-0a56-417c-9460-885a2d7f0155\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066536 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert\") pod \"8976d94f-0a56-417c-9460-885a2d7f0155\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066941 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.066999 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp" (OuterVolumeSpecName: 
"tmp") pod "8976d94f-0a56-417c-9460-885a2d7f0155" (UID: "8976d94f-0a56-417c-9460-885a2d7f0155"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067029 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067064 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config\") pod \"8976d94f-0a56-417c-9460-885a2d7f0155\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067126 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l59h\" (UniqueName: \"kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h\") pod \"8976d94f-0a56-417c-9460-885a2d7f0155\" (UID: \"8976d94f-0a56-417c-9460-885a2d7f0155\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067308 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config\") pod \"7d6c9e9e-2924-4940-baca-0d24615c9513\" (UID: \"7d6c9e9e-2924-4940-baca-0d24615c9513\") " Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067332 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp" (OuterVolumeSpecName: "tmp") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067527 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdzw\" (UniqueName: \"kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067673 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067680 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config" (OuterVolumeSpecName: "config") pod "8976d94f-0a56-417c-9460-885a2d7f0155" (UID: "8976d94f-0a56-417c-9460-885a2d7f0155"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067723 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca" (OuterVolumeSpecName: "client-ca") pod "8976d94f-0a56-417c-9460-885a2d7f0155" (UID: "8976d94f-0a56-417c-9460-885a2d7f0155"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067780 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067801 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067864 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vfrt\" (UniqueName: \"kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.067883 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068103 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068121 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068136 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068270 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068297 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068383 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068397 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/8976d94f-0a56-417c-9460-885a2d7f0155-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068405 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068413 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d6c9e9e-2924-4940-baca-0d24615c9513-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068474 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8976d94f-0a56-417c-9460-885a2d7f0155-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068642 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.068868 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config" (OuterVolumeSpecName: "config") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.072320 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8976d94f-0a56-417c-9460-885a2d7f0155" (UID: "8976d94f-0a56-417c-9460-885a2d7f0155"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.072421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h" (OuterVolumeSpecName: "kube-api-access-4l59h") pod "8976d94f-0a56-417c-9460-885a2d7f0155" (UID: "8976d94f-0a56-417c-9460-885a2d7f0155"). InnerVolumeSpecName "kube-api-access-4l59h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.072564 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.072681 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w" (OuterVolumeSpecName: "kube-api-access-2hb2w") pod "7d6c9e9e-2924-4940-baca-0d24615c9513" (UID: "7d6c9e9e-2924-4940-baca-0d24615c9513"). InnerVolumeSpecName "kube-api-access-2hb2w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170183 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vfrt\" (UniqueName: \"kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170321 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170388 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170426 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170460 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170477 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170533 5120 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170585 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdzw\" (UniqueName: \"kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170617 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170635 5120 generic.go:358] "Generic (PLEG): container finished" podID="8976d94f-0a56-417c-9460-885a2d7f0155" containerID="53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c" exitCode=0 Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170686 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170702 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hb2w\" (UniqueName: \"kubernetes.io/projected/7d6c9e9e-2924-4940-baca-0d24615c9513-kube-api-access-2hb2w\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170711 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6c9e9e-2924-4940-baca-0d24615c9513-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170738 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8976d94f-0a56-417c-9460-885a2d7f0155-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170749 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d6c9e9e-2924-4940-baca-0d24615c9513-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170761 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l59h\" (UniqueName: \"kubernetes.io/projected/8976d94f-0a56-417c-9460-885a2d7f0155-kube-api-access-4l59h\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170859 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" event={"ID":"8976d94f-0a56-417c-9460-885a2d7f0155","Type":"ContainerDied","Data":"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c"} Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" event={"ID":"8976d94f-0a56-417c-9460-885a2d7f0155","Type":"ContainerDied","Data":"713b7e33f849fc0c43e4bbd41b1be657fcdd67d20a56332ea8da8b3184315517"} Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170921 5120 scope.go:117] "RemoveContainer" containerID="53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.170940 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.171309 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.172543 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173216 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173246 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173318 5120 generic.go:358] "Generic (PLEG): container finished" podID="7d6c9e9e-2924-4940-baca-0d24615c9513" containerID="45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2" exitCode=0 Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173429 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173458 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" event={"ID":"7d6c9e9e-2924-4940-baca-0d24615c9513","Type":"ContainerDied","Data":"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2"} Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-bmg84" event={"ID":"7d6c9e9e-2924-4940-baca-0d24615c9513","Type":"ContainerDied","Data":"7fb37c35f38964808a528f44a97a10880bcedf41ce47b0b7bcde0330989f29ed"} Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.173776 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.174213 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.174611 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.179549 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert\") pod \"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.179819 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.186578 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdzw\" (UniqueName: \"kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw\") pod \"controller-manager-c47cd87bb-c54b9\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.187184 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vfrt\" (UniqueName: \"kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt\") pod 
\"route-controller-manager-57dc88565b-nq9ql\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.193964 5120 scope.go:117] "RemoveContainer" containerID="53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c" Dec 08 19:33:48 crc kubenswrapper[5120]: E1208 19:33:48.194427 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c\": container with ID starting with 53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c not found: ID does not exist" containerID="53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.194462 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c"} err="failed to get container status \"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c\": rpc error: code = NotFound desc = could not find container \"53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c\": container with ID starting with 53b21c4d7f7e6277dbb191018ab23060027f36b0f752b9da41a4e13521d50d2c not found: ID does not exist" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.194484 5120 scope.go:117] "RemoveContainer" containerID="45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.206729 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.212138 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-bmg84"] Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.214147 5120 scope.go:117] "RemoveContainer" containerID="45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2" Dec 08 19:33:48 crc kubenswrapper[5120]: E1208 19:33:48.214874 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2\": container with ID starting with 45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2 not found: ID does not exist" containerID="45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.214916 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2"} err="failed to get container status \"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2\": rpc error: code = NotFound desc = could not find container \"45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2\": container with ID starting with 45ef337e2a543472fde437865e828362f6b3532f8c0f0f4563db03df6a57c5f2 not found: ID does not exist" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.220136 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.223439 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-hqj5k"] Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.292903 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.313661 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.558941 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:33:48 crc kubenswrapper[5120]: I1208 19:33:48.592477 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:33:48 crc kubenswrapper[5120]: W1208 19:33:48.594296 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c71db43_3165_47ca_a66b_8769dbbdb401.slice/crio-d5cf8572379bbd1d2a7cdfced5178b458bd09d023315d740893b616d29d4e6c3 WatchSource:0}: Error finding container d5cf8572379bbd1d2a7cdfced5178b458bd09d023315d740893b616d29d4e6c3: Status 404 returned error can't find the container with id d5cf8572379bbd1d2a7cdfced5178b458bd09d023315d740893b616d29d4e6c3 Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.179270 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" event={"ID":"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5","Type":"ContainerStarted","Data":"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f"} Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.179614 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" event={"ID":"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5","Type":"ContainerStarted","Data":"515afcfc25b6d8dc0cb6fe1632a602e7738a39c0d078752863b1150abb452664"} Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.179635 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.185118 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.186698 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" event={"ID":"8c71db43-3165-47ca-a66b-8769dbbdb401","Type":"ContainerStarted","Data":"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da"} Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.186852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" event={"ID":"8c71db43-3165-47ca-a66b-8769dbbdb401","Type":"ContainerStarted","Data":"d5cf8572379bbd1d2a7cdfced5178b458bd09d023315d740893b616d29d4e6c3"} Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.186944 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.224755 5120 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" podStartSLOduration=2.2247380310000002 podStartE2EDuration="2.224738031s" podCreationTimestamp="2025-12-08 19:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:49.204561029 +0000 UTC m=+281.876667698" watchObservedRunningTime="2025-12-08 19:33:49.224738031 +0000 UTC m=+281.896844680" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.225752 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" podStartSLOduration=2.225743323 podStartE2EDuration="2.225743323s" podCreationTimestamp="2025-12-08 19:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:49.225426153 +0000 UTC m=+281.897532822" watchObservedRunningTime="2025-12-08 19:33:49.225743323 +0000 UTC m=+281.897849972" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.651619 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.668875 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6c9e9e-2924-4940-baca-0d24615c9513" path="/var/lib/kubelet/pods/7d6c9e9e-2924-4940-baca-0d24615c9513/volumes" Dec 08 19:33:49 crc kubenswrapper[5120]: I1208 19:33:49.669580 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8976d94f-0a56-417c-9460-885a2d7f0155" path="/var/lib/kubelet/pods/8976d94f-0a56-417c-9460-885a2d7f0155/volumes" Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.035429 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.035540 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.035624 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.036745 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9"} pod="openshift-machine-config-operator/machine-config-daemon-5j87q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.036881 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" 
containerID="cri-o://4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9" gracePeriod=600 Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.215846 5120 generic.go:358] "Generic (PLEG): container finished" podID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerID="4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9" exitCode=0 Dec 08 19:33:53 crc kubenswrapper[5120]: I1208 19:33:53.215949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerDied","Data":"4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9"} Dec 08 19:33:54 crc kubenswrapper[5120]: I1208 19:33:54.222481 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a"} Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.539374 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.541284 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" podUID="8c71db43-3165-47ca-a66b-8769dbbdb401" containerName="route-controller-manager" containerID="cri-o://b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da" gracePeriod=30 Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.802695 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.809713 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.821431 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.825499 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.920683 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.974064 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2"] Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.974863 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8c71db43-3165-47ca-a66b-8769dbbdb401" containerName="route-controller-manager" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.974893 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c71db43-3165-47ca-a66b-8769dbbdb401" containerName="route-controller-manager" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.975017 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8c71db43-3165-47ca-a66b-8769dbbdb401" containerName="route-controller-manager" Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.981986 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2"] Dec 08 19:34:07 crc kubenswrapper[5120]: I1208 19:34:07.982110 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061460 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert\") pod \"8c71db43-3165-47ca-a66b-8769dbbdb401\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vfrt\" (UniqueName: \"kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt\") pod \"8c71db43-3165-47ca-a66b-8769dbbdb401\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061676 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config\") pod \"8c71db43-3165-47ca-a66b-8769dbbdb401\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061751 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca\") pod \"8c71db43-3165-47ca-a66b-8769dbbdb401\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061773 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp\") pod \"8c71db43-3165-47ca-a66b-8769dbbdb401\" (UID: \"8c71db43-3165-47ca-a66b-8769dbbdb401\") " Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061928 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-serving-cert\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" 
Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.061952 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcm2b\" (UniqueName: \"kubernetes.io/projected/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-kube-api-access-lcm2b\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062055 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-tmp\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062085 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-client-ca\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062197 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-config\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062398 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp" (OuterVolumeSpecName: "tmp") pod "8c71db43-3165-47ca-a66b-8769dbbdb401" (UID: "8c71db43-3165-47ca-a66b-8769dbbdb401"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062468 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c71db43-3165-47ca-a66b-8769dbbdb401" (UID: "8c71db43-3165-47ca-a66b-8769dbbdb401"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.062504 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config" (OuterVolumeSpecName: "config") pod "8c71db43-3165-47ca-a66b-8769dbbdb401" (UID: "8c71db43-3165-47ca-a66b-8769dbbdb401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.071509 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt" (OuterVolumeSpecName: "kube-api-access-6vfrt") pod "8c71db43-3165-47ca-a66b-8769dbbdb401" (UID: "8c71db43-3165-47ca-a66b-8769dbbdb401"). InnerVolumeSpecName "kube-api-access-6vfrt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.071654 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c71db43-3165-47ca-a66b-8769dbbdb401" (UID: "8c71db43-3165-47ca-a66b-8769dbbdb401"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.163814 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-tmp\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.164145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-client-ca\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.164282 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-config\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.164402 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-tmp\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.164570 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-serving-cert\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.164706 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcm2b\" (UniqueName: \"kubernetes.io/projected/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-kube-api-access-lcm2b\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165058 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-client-ca\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165344 5120 
reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165465 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8c71db43-3165-47ca-a66b-8769dbbdb401-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165545 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c71db43-3165-47ca-a66b-8769dbbdb401-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165612 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vfrt\" (UniqueName: \"kubernetes.io/projected/8c71db43-3165-47ca-a66b-8769dbbdb401-kube-api-access-6vfrt\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165689 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c71db43-3165-47ca-a66b-8769dbbdb401-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.165624 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-config\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.168594 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-serving-cert\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.181270 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcm2b\" (UniqueName: \"kubernetes.io/projected/3f97c116-94de-4b6e-ac52-f7dc26c1d99a-kube-api-access-lcm2b\") pod \"route-controller-manager-686b56797d-6sdm2\" (UID: \"3f97c116-94de-4b6e-ac52-f7dc26c1d99a\") " pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.301922 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.306270 5120 generic.go:358] "Generic (PLEG): container finished" podID="8c71db43-3165-47ca-a66b-8769dbbdb401" containerID="b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da" exitCode=0 Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.306495 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.306520 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" event={"ID":"8c71db43-3165-47ca-a66b-8769dbbdb401","Type":"ContainerDied","Data":"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da"} Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.307295 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql" event={"ID":"8c71db43-3165-47ca-a66b-8769dbbdb401","Type":"ContainerDied","Data":"d5cf8572379bbd1d2a7cdfced5178b458bd09d023315d740893b616d29d4e6c3"} Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.307363 5120 scope.go:117] "RemoveContainer" containerID="b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.331808 5120 scope.go:117] "RemoveContainer" containerID="b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da" Dec 08 19:34:08 crc kubenswrapper[5120]: E1208 19:34:08.332694 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da\": container with ID starting with b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da not found: ID does not exist" containerID="b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.332768 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da"} err="failed to get container status \"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da\": rpc error: code = NotFound desc = could not find container \"b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da\": container with ID starting with b6f19fafb603287617b11de89f87b9fc750fd5c42d83c03178b4b51c7a8858da not found: ID does not exist" Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.360005 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.367150 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dc88565b-nq9ql"] Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.707524 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2"] Dec 08 19:34:08 crc kubenswrapper[5120]: I1208 19:34:08.712970 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.314357 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" event={"ID":"3f97c116-94de-4b6e-ac52-f7dc26c1d99a","Type":"ContainerStarted","Data":"e247e091b7546e662d0c4e84d9686c7d553f8e3ea657570abd7f8768a3dcab88"} Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.314703 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" 
event={"ID":"3f97c116-94de-4b6e-ac52-f7dc26c1d99a","Type":"ContainerStarted","Data":"3fee15ca6b16e9c7f01e6d94168572e2dd4b7f89153715fd3dab811d15673e6b"} Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.314729 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.667884 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c71db43-3165-47ca-a66b-8769dbbdb401" path="/var/lib/kubelet/pods/8c71db43-3165-47ca-a66b-8769dbbdb401/volumes" Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.897745 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" Dec 08 19:34:09 crc kubenswrapper[5120]: I1208 19:34:09.925902 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-686b56797d-6sdm2" podStartSLOduration=2.9258705320000002 podStartE2EDuration="2.925870532s" podCreationTimestamp="2025-12-08 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:09.341710887 +0000 UTC m=+302.013817556" watchObservedRunningTime="2025-12-08 19:34:09.925870532 +0000 UTC m=+302.597977221" Dec 08 19:34:28 crc kubenswrapper[5120]: I1208 19:34:28.756442 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:34:47 crc kubenswrapper[5120]: I1208 19:34:47.538485 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:34:47 crc kubenswrapper[5120]: I1208 19:34:47.539309 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" podUID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" containerName="controller-manager" containerID="cri-o://3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f" gracePeriod=30 Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.018765 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.066393 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf"] Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.066890 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" containerName="controller-manager" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.066906 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" containerName="controller-manager" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.067002 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" containerName="controller-manager" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.072393 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.075142 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf"] Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108544 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhdzw\" (UniqueName: \"kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108627 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108681 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108735 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108770 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108793 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config\") pod \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\" (UID: \"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5\") " Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcn2\" (UniqueName: \"kubernetes.io/projected/f8be846b-5808-4474-b163-89504d88579a-kube-api-access-wvcn2\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108929 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-client-ca\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108946 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-proxy-ca-bundles\") pod 
\"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8be846b-5808-4474-b163-89504d88579a-tmp\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.108988 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8be846b-5808-4474-b163-89504d88579a-serving-cert\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.109024 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-config\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.109338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.109387 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.109577 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp" (OuterVolumeSpecName: "tmp") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.109588 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config" (OuterVolumeSpecName: "config") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.119340 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw" (OuterVolumeSpecName: "kube-api-access-qhdzw") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "kube-api-access-qhdzw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.119373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" (UID: "0d30b108-ccc9-49fe-8ac7-eb0a003bcba5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209806 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvcn2\" (UniqueName: \"kubernetes.io/projected/f8be846b-5808-4474-b163-89504d88579a-kube-api-access-wvcn2\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-client-ca\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209884 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-proxy-ca-bundles\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209900 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8be846b-5808-4474-b163-89504d88579a-tmp\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209921 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8be846b-5808-4474-b163-89504d88579a-serving-cert\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.209956 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-config\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.210005 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.210017 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhdzw\" (UniqueName: \"kubernetes.io/projected/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-kube-api-access-qhdzw\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc 
kubenswrapper[5120]: I1208 19:34:48.210027 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.210034 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.210042 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.210050 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.211360 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-proxy-ca-bundles\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.211417 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-config\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.211683 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8be846b-5808-4474-b163-89504d88579a-tmp\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.212312 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8be846b-5808-4474-b163-89504d88579a-client-ca\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.215951 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8be846b-5808-4474-b163-89504d88579a-serving-cert\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.230901 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvcn2\" (UniqueName: \"kubernetes.io/projected/f8be846b-5808-4474-b163-89504d88579a-kube-api-access-wvcn2\") pod \"controller-manager-6d7c96cbd4-7szhf\" (UID: \"f8be846b-5808-4474-b163-89504d88579a\") " pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.405832 5120 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.564644 5120 generic.go:358] "Generic (PLEG): container finished" podID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" containerID="3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f" exitCode=0 Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.564845 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" event={"ID":"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5","Type":"ContainerDied","Data":"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f"} Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.564996 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" event={"ID":"0d30b108-ccc9-49fe-8ac7-eb0a003bcba5","Type":"ContainerDied","Data":"515afcfc25b6d8dc0cb6fe1632a602e7738a39c0d078752863b1150abb452664"} Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.565019 5120 scope.go:117] "RemoveContainer" containerID="3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.564939 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c47cd87bb-c54b9" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.595719 5120 scope.go:117] "RemoveContainer" containerID="3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.599316 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:34:48 crc kubenswrapper[5120]: E1208 19:34:48.602035 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f\": container with ID starting with 3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f not found: ID does not exist" containerID="3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.602077 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f"} err="failed to get container status \"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f\": rpc error: code = NotFound desc = could not find container \"3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f\": container with ID starting with 3c24759ddc01eb3986d16b81db47da3c74e325b0bf41945faad264a281a7e43f not found: ID does not exist" Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.602925 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c47cd87bb-c54b9"] Dec 08 19:34:48 crc kubenswrapper[5120]: I1208 19:34:48.611108 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf"] Dec 08 19:34:48 crc kubenswrapper[5120]: W1208 19:34:48.618810 5120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8be846b_5808_4474_b163_89504d88579a.slice/crio-243301882a34a182354e75a0a42506a4df1b47469a183b38ee583cae15d649d7 WatchSource:0}: Error finding container 243301882a34a182354e75a0a42506a4df1b47469a183b38ee583cae15d649d7: Status 404 returned error can't find the container with id 243301882a34a182354e75a0a42506a4df1b47469a183b38ee583cae15d649d7 Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.574549 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" event={"ID":"f8be846b-5808-4474-b163-89504d88579a","Type":"ContainerStarted","Data":"892278da246410e6e419d61ce59e4bbc0f4250feca07e4a276abb62319141415"} Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.574948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" event={"ID":"f8be846b-5808-4474-b163-89504d88579a","Type":"ContainerStarted","Data":"243301882a34a182354e75a0a42506a4df1b47469a183b38ee583cae15d649d7"} Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.575423 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.581545 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.593442 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d7c96cbd4-7szhf" podStartSLOduration=2.593421812 podStartE2EDuration="2.593421812s" podCreationTimestamp="2025-12-08 19:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:49.591537441 +0000 UTC m=+342.263644120" watchObservedRunningTime="2025-12-08 19:34:49.593421812 +0000 UTC m=+342.265528461" Dec 08 19:34:49 crc kubenswrapper[5120]: I1208 19:34:49.674299 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d30b108-ccc9-49fe-8ac7-eb0a003bcba5" path="/var/lib/kubelet/pods/0d30b108-ccc9-49fe-8ac7-eb0a003bcba5/volumes" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.159703 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.160424 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5czqn" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="registry-server" containerID="cri-o://2d60d9c588c4f80451d6bc26fb93726419d0ae8b715d83cbd78af8f264a644a4" gracePeriod=30 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.165071 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.165673 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wv6j" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="registry-server" containerID="cri-o://7c2adb36c19be4c7094db31684c4fd2f20aa98a4a06e95467dbe133ec0868723" gracePeriod=30 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.173350 5120 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.173592 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" containerID="cri-o://dba2ef8917137455a64d108c1811e9d3799f6e31ad946254fc55d76c8c51c821" gracePeriod=30 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.195385 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.195764 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bfcr5" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="registry-server" containerID="cri-o://cdbcf959715feacd9fbbc73bce688eb4c212422fbb3f88cd62a670934aad65e3" gracePeriod=30 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.205533 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-p9zms"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.257112 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.257179 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-p9zms"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.257344 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.257697 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r42pt" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="registry-server" containerID="cri-o://daa75d2c262dbd2c8847d5b9282f050fb3c2abf9af688f8668fad1ca6e95319c" gracePeriod=30 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.386042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d252605f-0abb-460e-8361-8184a38b1a74-tmp\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.386461 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.386529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtw6\" (UniqueName: \"kubernetes.io/projected/d252605f-0abb-460e-8361-8184a38b1a74-kube-api-access-wdtw6\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.386554 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.487845 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.487913 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtw6\" (UniqueName: \"kubernetes.io/projected/d252605f-0abb-460e-8361-8184a38b1a74-kube-api-access-wdtw6\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.487950 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.488000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d252605f-0abb-460e-8361-8184a38b1a74-tmp\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.488708 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d252605f-0abb-460e-8361-8184a38b1a74-tmp\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.489776 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.495555 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d252605f-0abb-460e-8361-8184a38b1a74-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.509063 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtw6\" (UniqueName: 
\"kubernetes.io/projected/d252605f-0abb-460e-8361-8184a38b1a74-kube-api-access-wdtw6\") pod \"marketplace-operator-547dbd544d-p9zms\" (UID: \"d252605f-0abb-460e-8361-8184a38b1a74\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.601905 5120 generic.go:358] "Generic (PLEG): container finished" podID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerID="dba2ef8917137455a64d108c1811e9d3799f6e31ad946254fc55d76c8c51c821" exitCode=0 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.602016 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerDied","Data":"dba2ef8917137455a64d108c1811e9d3799f6e31ad946254fc55d76c8c51c821"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.602053 5120 scope.go:117] "RemoveContainer" containerID="ee5d8822946afce0e50863b70cdf99e51f5a6d874e36ddd1e885cbd28a0110ee" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.614922 5120 generic.go:358] "Generic (PLEG): container finished" podID="8646edae-915b-459b-b385-491aaf3939ec" containerID="2d60d9c588c4f80451d6bc26fb93726419d0ae8b715d83cbd78af8f264a644a4" exitCode=0 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.615032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerDied","Data":"2d60d9c588c4f80451d6bc26fb93726419d0ae8b715d83cbd78af8f264a644a4"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.617896 5120 generic.go:358] "Generic (PLEG): container finished" podID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerID="daa75d2c262dbd2c8847d5b9282f050fb3c2abf9af688f8668fad1ca6e95319c" exitCode=0 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.617953 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerDied","Data":"daa75d2c262dbd2c8847d5b9282f050fb3c2abf9af688f8668fad1ca6e95319c"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.621564 5120 generic.go:358] "Generic (PLEG): container finished" podID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerID="cdbcf959715feacd9fbbc73bce688eb4c212422fbb3f88cd62a670934aad65e3" exitCode=0 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.621648 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerDied","Data":"cdbcf959715feacd9fbbc73bce688eb4c212422fbb3f88cd62a670934aad65e3"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.624569 5120 generic.go:358] "Generic (PLEG): container finished" podID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerID="7c2adb36c19be4c7094db31684c4fd2f20aa98a4a06e95467dbe133ec0868723" exitCode=0 Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.624655 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerDied","Data":"7c2adb36c19be4c7094db31684c4fd2f20aa98a4a06e95467dbe133ec0868723"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.624675 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wv6j" 
event={"ID":"65e44f89-0e9d-46f9-a56b-7f01d1090930","Type":"ContainerDied","Data":"221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac"} Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.624687 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221119f59cf04cd3f498eae9c61b113787cb109da6c5da590dbfc9ce43757bac" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.637715 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.638777 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.649255 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.666466 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.779581 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.783485 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793556 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content\") pod \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793634 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content\") pod \"65e44f89-0e9d-46f9-a56b-7f01d1090930\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793660 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities\") pod \"8646edae-915b-459b-b385-491aaf3939ec\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793714 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities\") pod \"65e44f89-0e9d-46f9-a56b-7f01d1090930\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793732 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content\") pod \"8646edae-915b-459b-b385-491aaf3939ec\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793760 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xvxc\" (UniqueName: 
\"kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc\") pod \"65e44f89-0e9d-46f9-a56b-7f01d1090930\" (UID: \"65e44f89-0e9d-46f9-a56b-7f01d1090930\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793788 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wjck\" (UniqueName: \"kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck\") pod \"8646edae-915b-459b-b385-491aaf3939ec\" (UID: \"8646edae-915b-459b-b385-491aaf3939ec\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793807 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities\") pod \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.793825 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4qqm\" (UniqueName: \"kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm\") pod \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\" (UID: \"b216d4b8-5d23-462c-9fbc-bce0c620a83a\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.800614 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities" (OuterVolumeSpecName: "utilities") pod "8646edae-915b-459b-b385-491aaf3939ec" (UID: "8646edae-915b-459b-b385-491aaf3939ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.802052 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm" (OuterVolumeSpecName: "kube-api-access-h4qqm") pod "b216d4b8-5d23-462c-9fbc-bce0c620a83a" (UID: "b216d4b8-5d23-462c-9fbc-bce0c620a83a"). InnerVolumeSpecName "kube-api-access-h4qqm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.803081 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities" (OuterVolumeSpecName: "utilities") pod "65e44f89-0e9d-46f9-a56b-7f01d1090930" (UID: "65e44f89-0e9d-46f9-a56b-7f01d1090930"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.808182 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck" (OuterVolumeSpecName: "kube-api-access-4wjck") pod "8646edae-915b-459b-b385-491aaf3939ec" (UID: "8646edae-915b-459b-b385-491aaf3939ec"). InnerVolumeSpecName "kube-api-access-4wjck". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.809219 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b216d4b8-5d23-462c-9fbc-bce0c620a83a" (UID: "b216d4b8-5d23-462c-9fbc-bce0c620a83a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.809936 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities" (OuterVolumeSpecName: "utilities") pod "b216d4b8-5d23-462c-9fbc-bce0c620a83a" (UID: "b216d4b8-5d23-462c-9fbc-bce0c620a83a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.835634 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc" (OuterVolumeSpecName: "kube-api-access-9xvxc") pod "65e44f89-0e9d-46f9-a56b-7f01d1090930" (UID: "65e44f89-0e9d-46f9-a56b-7f01d1090930"). InnerVolumeSpecName "kube-api-access-9xvxc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.846420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8646edae-915b-459b-b385-491aaf3939ec" (UID: "8646edae-915b-459b-b385-491aaf3939ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.864247 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65e44f89-0e9d-46f9-a56b-7f01d1090930" (UID: "65e44f89-0e9d-46f9-a56b-7f01d1090930"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895244 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics\") pod \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895339 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content\") pod \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895417 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76297\" (UniqueName: \"kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297\") pod \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895457 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8jwz\" (UniqueName: \"kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz\") pod \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895490 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca\") pod \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895519 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp\") pod \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\" (UID: \"76f020ff-36ee-4661-a02f-9fb3f5a504ac\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895539 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities\") pod \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\" (UID: \"7dddc5e3-08fc-4488-aec8-6920e4ff05ed\") " Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895705 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895727 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895741 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xvxc\" (UniqueName: \"kubernetes.io/projected/65e44f89-0e9d-46f9-a56b-7f01d1090930-kube-api-access-9xvxc\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895753 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wjck\" (UniqueName: 
\"kubernetes.io/projected/8646edae-915b-459b-b385-491aaf3939ec-kube-api-access-4wjck\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895764 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895774 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4qqm\" (UniqueName: \"kubernetes.io/projected/b216d4b8-5d23-462c-9fbc-bce0c620a83a-kube-api-access-h4qqm\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895786 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b216d4b8-5d23-462c-9fbc-bce0c620a83a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895796 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e44f89-0e9d-46f9-a56b-7f01d1090930-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.895804 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8646edae-915b-459b-b385-491aaf3939ec-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.896670 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities" (OuterVolumeSpecName: "utilities") pod "7dddc5e3-08fc-4488-aec8-6920e4ff05ed" (UID: "7dddc5e3-08fc-4488-aec8-6920e4ff05ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.897543 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp" (OuterVolumeSpecName: "tmp") pod "76f020ff-36ee-4661-a02f-9fb3f5a504ac" (UID: "76f020ff-36ee-4661-a02f-9fb3f5a504ac"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.898323 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "76f020ff-36ee-4661-a02f-9fb3f5a504ac" (UID: "76f020ff-36ee-4661-a02f-9fb3f5a504ac"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.899776 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-p9zms"] Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.900089 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz" (OuterVolumeSpecName: "kube-api-access-j8jwz") pod "76f020ff-36ee-4661-a02f-9fb3f5a504ac" (UID: "76f020ff-36ee-4661-a02f-9fb3f5a504ac"). InnerVolumeSpecName "kube-api-access-j8jwz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.900330 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297" (OuterVolumeSpecName: "kube-api-access-76297") pod "7dddc5e3-08fc-4488-aec8-6920e4ff05ed" (UID: "7dddc5e3-08fc-4488-aec8-6920e4ff05ed"). InnerVolumeSpecName "kube-api-access-76297". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.900678 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "76f020ff-36ee-4661-a02f-9fb3f5a504ac" (UID: "76f020ff-36ee-4661-a02f-9fb3f5a504ac"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.994183 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dddc5e3-08fc-4488-aec8-6920e4ff05ed" (UID: "7dddc5e3-08fc-4488-aec8-6920e4ff05ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997046 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997068 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/76f020ff-36ee-4661-a02f-9fb3f5a504ac-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997078 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997086 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76f020ff-36ee-4661-a02f-9fb3f5a504ac-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997096 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997105 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-76297\" (UniqueName: \"kubernetes.io/projected/7dddc5e3-08fc-4488-aec8-6920e4ff05ed-kube-api-access-76297\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:52 crc kubenswrapper[5120]: I1208 19:34:52.997113 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j8jwz\" (UniqueName: \"kubernetes.io/projected/76f020ff-36ee-4661-a02f-9fb3f5a504ac-kube-api-access-j8jwz\") on node \"crc\" DevicePath \"\"" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.644092 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5czqn" 
event={"ID":"8646edae-915b-459b-b385-491aaf3939ec","Type":"ContainerDied","Data":"347e33352f282a4d3eb231380c1c26a52b6d99f5d98dbf43335816aa350699f4"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.644320 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5czqn" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.644551 5120 scope.go:117] "RemoveContainer" containerID="2d60d9c588c4f80451d6bc26fb93726419d0ae8b715d83cbd78af8f264a644a4" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.648116 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r42pt" event={"ID":"7dddc5e3-08fc-4488-aec8-6920e4ff05ed","Type":"ContainerDied","Data":"0894d75775895253009b21961e20048338b1e89891fc949a157fee74df4bf496"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.648194 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r42pt" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.657489 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfcr5" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.657505 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfcr5" event={"ID":"b216d4b8-5d23-462c-9fbc-bce0c620a83a","Type":"ContainerDied","Data":"acf2d2d834656d2770734c49e64e95c5299c95631547ad5b92399b2be8ed3c0e"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.684478 5120 scope.go:117] "RemoveContainer" containerID="f9b63c6e8b1da64494d8bcc3ba13c52db4ab87a615234c54b583042ac638c8b4" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.684687 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wv6j" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.686248 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.690328 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" event={"ID":"d252605f-0abb-460e-8361-8184a38b1a74","Type":"ContainerStarted","Data":"1e0c0f681cb334be3a5e8beba07ae163d9f17b8b23e6bbcfd2278a48f6d84b31"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.690371 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" event={"ID":"d252605f-0abb-460e-8361-8184a38b1a74","Type":"ContainerStarted","Data":"56a8a2d50388224b6e359a4e1fc938d23cfe9fa6227064d639facd9f62d52c0f"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.690387 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-htdxf" event={"ID":"76f020ff-36ee-4661-a02f-9fb3f5a504ac","Type":"ContainerDied","Data":"d898a2f2eb938ce5a399ed56fafa813920980502b67f73639b4faf774c629ab2"} Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.690408 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.690447 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.695428 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.705898 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5czqn"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.709981 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-p9zms" podStartSLOduration=1.709959537 podStartE2EDuration="1.709959537s" podCreationTimestamp="2025-12-08 19:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:34:53.703435281 +0000 UTC m=+346.375541950" watchObservedRunningTime="2025-12-08 19:34:53.709959537 +0000 UTC m=+346.382066226" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.719639 5120 scope.go:117] "RemoveContainer" containerID="ef0d6eeaddefba954e622716fa0751a09176da1ac7c43737ac521fd6b38c5d13" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.756640 5120 scope.go:117] "RemoveContainer" containerID="daa75d2c262dbd2c8847d5b9282f050fb3c2abf9af688f8668fad1ca6e95319c" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.764026 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.774558 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfcr5"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.784002 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.794698 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r42pt"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.797815 5120 
scope.go:117] "RemoveContainer" containerID="7c146e0e0f3153c74e2989ad42ecedcb541ad8efbf71b647fd520c4e524bb5d5" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.799489 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.802728 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wv6j"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.805933 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.809019 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-htdxf"] Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.824767 5120 scope.go:117] "RemoveContainer" containerID="234c609a54b29b9300fb52784b1cae19482bc30bdb2447f2e719e2d69509b719" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.838231 5120 scope.go:117] "RemoveContainer" containerID="cdbcf959715feacd9fbbc73bce688eb4c212422fbb3f88cd62a670934aad65e3" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.857889 5120 scope.go:117] "RemoveContainer" containerID="9f01c812bf70c97a712e042e2a74d6da3615aeb88aaf5996ad63aadb99398fad" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.870033 5120 scope.go:117] "RemoveContainer" containerID="31b9fe3a1e8038a00557fb573b5339f460dfb44950be4c4182d6e10de22d2aa6" Dec 08 19:34:53 crc kubenswrapper[5120]: I1208 19:34:53.880579 5120 scope.go:117] "RemoveContainer" containerID="dba2ef8917137455a64d108c1811e9d3799f6e31ad946254fc55d76c8c51c821" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979196 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lk8zb"] Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979712 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979723 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979732 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979737 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979746 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979752 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979764 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979770 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="extract-utilities" Dec 
08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979778 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979783 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979792 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979797 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979804 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979809 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979815 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979821 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979836 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979842 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979850 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979855 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979863 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979868 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979875 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979880 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="extract-utilities" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979887 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="extract-content" Dec 08 19:34:54 crc 
kubenswrapper[5120]: I1208 19:34:54.979892 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="extract-content" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979901 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979906 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979977 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979988 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.979994 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8646edae-915b-459b-b385-491aaf3939ec" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.980002 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.980013 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" containerName="registry-server" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.980200 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" containerName="marketplace-operator" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.987060 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.990514 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:34:54 crc kubenswrapper[5120]: I1208 19:34:54.993676 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lk8zb"] Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.123817 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8zrm\" (UniqueName: \"kubernetes.io/projected/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-kube-api-access-h8zrm\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.124114 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-utilities\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.124240 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-catalog-content\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.225205 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-catalog-content\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.225268 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8zrm\" (UniqueName: \"kubernetes.io/projected/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-kube-api-access-h8zrm\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.225316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-utilities\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.225701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-catalog-content\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.225747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-utilities\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") 
" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.249338 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8zrm\" (UniqueName: \"kubernetes.io/projected/00d1cbab-1ad5-431c-80b8-fb8795bb87d4-kube-api-access-h8zrm\") pod \"redhat-operators-lk8zb\" (UID: \"00d1cbab-1ad5-431c-80b8-fb8795bb87d4\") " pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.302359 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.667559 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e44f89-0e9d-46f9-a56b-7f01d1090930" path="/var/lib/kubelet/pods/65e44f89-0e9d-46f9-a56b-7f01d1090930/volumes" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.668770 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f020ff-36ee-4661-a02f-9fb3f5a504ac" path="/var/lib/kubelet/pods/76f020ff-36ee-4661-a02f-9fb3f5a504ac/volumes" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.669320 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dddc5e3-08fc-4488-aec8-6920e4ff05ed" path="/var/lib/kubelet/pods/7dddc5e3-08fc-4488-aec8-6920e4ff05ed/volumes" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.670677 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8646edae-915b-459b-b385-491aaf3939ec" path="/var/lib/kubelet/pods/8646edae-915b-459b-b385-491aaf3939ec/volumes" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.671376 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b216d4b8-5d23-462c-9fbc-bce0c620a83a" path="/var/lib/kubelet/pods/b216d4b8-5d23-462c-9fbc-bce0c620a83a/volumes" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.731526 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lk8zb"] Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.976553 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jt5g"] Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.984600 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:55 crc kubenswrapper[5120]: I1208 19:34:55.988190 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:55.993511 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jt5g"] Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.136789 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtnm\" (UniqueName: \"kubernetes.io/projected/6bb7244b-a293-4492-9c81-9875635963d2-kube-api-access-zrtnm\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.136874 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-utilities\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.136917 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-catalog-content\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.237670 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-catalog-content\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.238049 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtnm\" (UniqueName: \"kubernetes.io/projected/6bb7244b-a293-4492-9c81-9875635963d2-kube-api-access-zrtnm\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.238202 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-utilities\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.238232 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-catalog-content\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.238976 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb7244b-a293-4492-9c81-9875635963d2-utilities\") pod 
\"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.258503 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtnm\" (UniqueName: \"kubernetes.io/projected/6bb7244b-a293-4492-9c81-9875635963d2-kube-api-access-zrtnm\") pod \"certified-operators-8jt5g\" (UID: \"6bb7244b-a293-4492-9c81-9875635963d2\") " pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.345302 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.716823 5120 generic.go:358] "Generic (PLEG): container finished" podID="00d1cbab-1ad5-431c-80b8-fb8795bb87d4" containerID="21cfefeb91ced8e8ea1fdf1193123f663b333d521132d08234854316312d9c0d" exitCode=0 Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.716869 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lk8zb" event={"ID":"00d1cbab-1ad5-431c-80b8-fb8795bb87d4","Type":"ContainerDied","Data":"21cfefeb91ced8e8ea1fdf1193123f663b333d521132d08234854316312d9c0d"} Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.717514 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lk8zb" event={"ID":"00d1cbab-1ad5-431c-80b8-fb8795bb87d4","Type":"ContainerStarted","Data":"a34b6ee03280cd75f74ddc43542b7784197237a9a995d8231bfce90c51fde521"} Dec 08 19:34:56 crc kubenswrapper[5120]: I1208 19:34:56.766224 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jt5g"] Dec 08 19:34:56 crc kubenswrapper[5120]: W1208 19:34:56.777274 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb7244b_a293_4492_9c81_9875635963d2.slice/crio-41c62e0b5178a41e8fad8820d5b041ce15bb3992b3b59f2ca5f67b6d3b9f48f6 WatchSource:0}: Error finding container 41c62e0b5178a41e8fad8820d5b041ce15bb3992b3b59f2ca5f67b6d3b9f48f6: Status 404 returned error can't find the container with id 41c62e0b5178a41e8fad8820d5b041ce15bb3992b3b59f2ca5f67b6d3b9f48f6 Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.375970 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpk8x"] Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.383073 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.386296 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpk8x"] Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.387467 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.554989 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-utilities\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.555523 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-catalog-content\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.555824 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xc8q\" (UniqueName: \"kubernetes.io/projected/1326aa4a-ba1f-4495-9567-7aba4213e832-kube-api-access-6xc8q\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.656776 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-catalog-content\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.656843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xc8q\" (UniqueName: \"kubernetes.io/projected/1326aa4a-ba1f-4495-9567-7aba4213e832-kube-api-access-6xc8q\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.656873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-utilities\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.657284 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-utilities\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.657490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1326aa4a-ba1f-4495-9567-7aba4213e832-catalog-content\") pod 
\"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.682285 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xc8q\" (UniqueName: \"kubernetes.io/projected/1326aa4a-ba1f-4495-9567-7aba4213e832-kube-api-access-6xc8q\") pod \"community-operators-rpk8x\" (UID: \"1326aa4a-ba1f-4495-9567-7aba4213e832\") " pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.705128 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.725342 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lk8zb" event={"ID":"00d1cbab-1ad5-431c-80b8-fb8795bb87d4","Type":"ContainerStarted","Data":"a637bf871a764cef30da5be52bd7a0581d04dbf17b4bd3e5e83b654e45262ccc"} Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.729120 5120 generic.go:358] "Generic (PLEG): container finished" podID="6bb7244b-a293-4492-9c81-9875635963d2" containerID="2335895f237d66e61e85dcbf7efc878a30e40252e6af5813a5b8a2083d4a1684" exitCode=0 Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.729283 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jt5g" event={"ID":"6bb7244b-a293-4492-9c81-9875635963d2","Type":"ContainerDied","Data":"2335895f237d66e61e85dcbf7efc878a30e40252e6af5813a5b8a2083d4a1684"} Dec 08 19:34:57 crc kubenswrapper[5120]: I1208 19:34:57.729320 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jt5g" event={"ID":"6bb7244b-a293-4492-9c81-9875635963d2","Type":"ContainerStarted","Data":"41c62e0b5178a41e8fad8820d5b041ce15bb3992b3b59f2ca5f67b6d3b9f48f6"} Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.111564 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpk8x"] Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.372287 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.386803 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.386955 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.389419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.471810 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ndp\" (UniqueName: \"kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.471874 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.471913 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.572866 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.572914 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.572985 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ndp\" (UniqueName: \"kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.573652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.573802 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.592028 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ndp\" (UniqueName: \"kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp\") pod \"redhat-marketplace-lq4gq\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.736685 5120 generic.go:358] "Generic (PLEG): container finished" podID="00d1cbab-1ad5-431c-80b8-fb8795bb87d4" containerID="a637bf871a764cef30da5be52bd7a0581d04dbf17b4bd3e5e83b654e45262ccc" exitCode=0 Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.736767 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lk8zb" event={"ID":"00d1cbab-1ad5-431c-80b8-fb8795bb87d4","Type":"ContainerDied","Data":"a637bf871a764cef30da5be52bd7a0581d04dbf17b4bd3e5e83b654e45262ccc"} Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.740364 5120 generic.go:358] "Generic (PLEG): container finished" podID="1326aa4a-ba1f-4495-9567-7aba4213e832" containerID="d8d0f43fb7b834b61ba9f8253eaaa44555ab6cf0dfcec733d24b2a123a7f75c2" exitCode=0 Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.741139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpk8x" event={"ID":"1326aa4a-ba1f-4495-9567-7aba4213e832","Type":"ContainerDied","Data":"d8d0f43fb7b834b61ba9f8253eaaa44555ab6cf0dfcec733d24b2a123a7f75c2"} Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.741180 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpk8x" event={"ID":"1326aa4a-ba1f-4495-9567-7aba4213e832","Type":"ContainerStarted","Data":"283c5eb384c82622e306da115482f6814588a628c14f7b749d2bcf12bfb8999f"} Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.746595 5120 generic.go:358] "Generic (PLEG): container finished" podID="6bb7244b-a293-4492-9c81-9875635963d2" containerID="09b034ac6ae7d3603e23776e0d73bf4ac8cceeaf33be7e26a0dbc23246753109" exitCode=0 Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.746636 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jt5g" event={"ID":"6bb7244b-a293-4492-9c81-9875635963d2","Type":"ContainerDied","Data":"09b034ac6ae7d3603e23776e0d73bf4ac8cceeaf33be7e26a0dbc23246753109"} Dec 08 19:34:58 crc kubenswrapper[5120]: I1208 19:34:58.767726 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.174464 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.754919 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lk8zb" event={"ID":"00d1cbab-1ad5-431c-80b8-fb8795bb87d4","Type":"ContainerStarted","Data":"b9382ce3e27cf52d0c1d402a49528be429c975001acde094a7f3894e04ef4d6a"} Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.757340 5120 generic.go:358] "Generic (PLEG): container finished" podID="1326aa4a-ba1f-4495-9567-7aba4213e832" containerID="a878105ad88d8b99d524a0d6f34e7427cc316cf237dbc18c6f9ed31470cc1ce6" exitCode=0 Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.757417 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpk8x" event={"ID":"1326aa4a-ba1f-4495-9567-7aba4213e832","Type":"ContainerDied","Data":"a878105ad88d8b99d524a0d6f34e7427cc316cf237dbc18c6f9ed31470cc1ce6"} Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.762802 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jt5g" event={"ID":"6bb7244b-a293-4492-9c81-9875635963d2","Type":"ContainerStarted","Data":"0423884f09e0754b218162d7efb6584fd51f57c832b769f2e8d0b25ee66bac9c"} Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.765348 5120 generic.go:358] "Generic (PLEG): container finished" podID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerID="e77034c4cfbdaddbb0f7d243f9159920241cef427ad3c6eed93e817fb72ee379" exitCode=0 Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.765422 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerDied","Data":"e77034c4cfbdaddbb0f7d243f9159920241cef427ad3c6eed93e817fb72ee379"} Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.765619 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerStarted","Data":"77e55d1d9c81db35f737fa8039e9f137785992817fa5678a32c9c3f9c1376722"} Dec 08 19:34:59 crc kubenswrapper[5120]: I1208 19:34:59.786144 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lk8zb" podStartSLOduration=5.114408804 podStartE2EDuration="5.786126422s" podCreationTimestamp="2025-12-08 19:34:54 +0000 UTC" firstStartedPulling="2025-12-08 19:34:56.717975104 +0000 UTC m=+349.390081763" lastFinishedPulling="2025-12-08 19:34:57.389692732 +0000 UTC m=+350.061799381" observedRunningTime="2025-12-08 19:34:59.784604714 +0000 UTC m=+352.456711373" watchObservedRunningTime="2025-12-08 19:34:59.786126422 +0000 UTC m=+352.458233071" Dec 08 19:35:00 crc kubenswrapper[5120]: I1208 19:35:00.773771 5120 generic.go:358] "Generic (PLEG): container finished" podID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerID="2f94a15b3cddd26e7b5e070684ddb21e877259396ef3f84929a90c3dcfdc9053" exitCode=0 Dec 08 19:35:00 crc kubenswrapper[5120]: I1208 19:35:00.773822 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" 
event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerDied","Data":"2f94a15b3cddd26e7b5e070684ddb21e877259396ef3f84929a90c3dcfdc9053"} Dec 08 19:35:00 crc kubenswrapper[5120]: I1208 19:35:00.777570 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpk8x" event={"ID":"1326aa4a-ba1f-4495-9567-7aba4213e832","Type":"ContainerStarted","Data":"329eb1f77e4565656bce782da6a253477665b4bae855135ae4e6eabe65c97c23"} Dec 08 19:35:00 crc kubenswrapper[5120]: I1208 19:35:00.794414 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jt5g" podStartSLOduration=5.320098885 podStartE2EDuration="5.794396301s" podCreationTimestamp="2025-12-08 19:34:55 +0000 UTC" firstStartedPulling="2025-12-08 19:34:57.730039633 +0000 UTC m=+350.402146282" lastFinishedPulling="2025-12-08 19:34:58.204337049 +0000 UTC m=+350.876443698" observedRunningTime="2025-12-08 19:34:59.84775944 +0000 UTC m=+352.519866089" watchObservedRunningTime="2025-12-08 19:35:00.794396301 +0000 UTC m=+353.466502980" Dec 08 19:35:00 crc kubenswrapper[5120]: I1208 19:35:00.820377 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpk8x" podStartSLOduration=3.26807991 podStartE2EDuration="3.820355922s" podCreationTimestamp="2025-12-08 19:34:57 +0000 UTC" firstStartedPulling="2025-12-08 19:34:58.741032288 +0000 UTC m=+351.413138927" lastFinishedPulling="2025-12-08 19:34:59.29330829 +0000 UTC m=+351.965414939" observedRunningTime="2025-12-08 19:35:00.815050904 +0000 UTC m=+353.487157553" watchObservedRunningTime="2025-12-08 19:35:00.820355922 +0000 UTC m=+353.492462571" Dec 08 19:35:01 crc kubenswrapper[5120]: I1208 19:35:01.788488 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerStarted","Data":"c42bf799f7aee589ed101c26c3174fee94e45f51aa39cc6a3920b57bf4ee2c51"} Dec 08 19:35:01 crc kubenswrapper[5120]: I1208 19:35:01.809025 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lq4gq" podStartSLOduration=3.297662612 podStartE2EDuration="3.80899452s" podCreationTimestamp="2025-12-08 19:34:58 +0000 UTC" firstStartedPulling="2025-12-08 19:34:59.765934293 +0000 UTC m=+352.438040932" lastFinishedPulling="2025-12-08 19:35:00.277266181 +0000 UTC m=+352.949372840" observedRunningTime="2025-12-08 19:35:01.808242017 +0000 UTC m=+354.480348706" watchObservedRunningTime="2025-12-08 19:35:01.80899452 +0000 UTC m=+354.481101209" Dec 08 19:35:05 crc kubenswrapper[5120]: I1208 19:35:05.303064 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:35:05 crc kubenswrapper[5120]: I1208 19:35:05.303583 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:35:05 crc kubenswrapper[5120]: I1208 19:35:05.348507 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:35:05 crc kubenswrapper[5120]: I1208 19:35:05.847708 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lk8zb" Dec 08 19:35:06 crc kubenswrapper[5120]: I1208 19:35:06.351561 5120 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:35:06 crc kubenswrapper[5120]: I1208 19:35:06.351766 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:35:06 crc kubenswrapper[5120]: I1208 19:35:06.405465 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:35:06 crc kubenswrapper[5120]: I1208 19:35:06.846792 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jt5g" Dec 08 19:35:07 crc kubenswrapper[5120]: I1208 19:35:07.706266 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:35:07 crc kubenswrapper[5120]: I1208 19:35:07.706397 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:35:07 crc kubenswrapper[5120]: I1208 19:35:07.744544 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:35:07 crc kubenswrapper[5120]: I1208 19:35:07.875761 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpk8x" Dec 08 19:35:08 crc kubenswrapper[5120]: I1208 19:35:08.767891 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:35:08 crc kubenswrapper[5120]: I1208 19:35:08.767944 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:35:08 crc kubenswrapper[5120]: I1208 19:35:08.806768 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:35:08 crc kubenswrapper[5120]: I1208 19:35:08.865792 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:35:53 crc kubenswrapper[5120]: I1208 19:35:53.034697 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:35:53 crc kubenswrapper[5120]: I1208 19:35:53.035358 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:36:23 crc kubenswrapper[5120]: I1208 19:36:23.035939 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:36:23 crc kubenswrapper[5120]: I1208 19:36:23.036982 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.034801 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.035407 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.035452 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.035881 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a"} pod="openshift-machine-config-operator/machine-config-daemon-5j87q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.035929 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" containerID="cri-o://3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a" gracePeriod=600 Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.521813 5120 generic.go:358] "Generic (PLEG): container finished" podID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerID="3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a" exitCode=0 Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.521996 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerDied","Data":"3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a"} Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.522408 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5"} Dec 08 19:36:53 crc kubenswrapper[5120]: I1208 19:36:53.522435 5120 scope.go:117] "RemoveContainer" containerID="4dd3fcd7266ba8580b4ce0c50672aa45dfb58c87ad9c83ff10b43a7f0712c6c9" Dec 08 19:38:08 crc kubenswrapper[5120]: I1208 19:38:08.050982 5120 scope.go:117] "RemoveContainer" containerID="81f69567f03f2e7f5638d92d7bbf40defd21515b8be615f6a5e1b40987159ee2" Dec 08 19:38:08 crc kubenswrapper[5120]: I1208 19:38:08.071282 5120 scope.go:117] "RemoveContainer" containerID="7c2adb36c19be4c7094db31684c4fd2f20aa98a4a06e95467dbe133ec0868723" Dec 08 19:38:08 crc kubenswrapper[5120]: I1208 19:38:08.115893 5120 scope.go:117] "RemoveContainer" 
containerID="acf3b035c4e09043732b1e8db6f3c0cd7e173ef035e4d77c6e2a9a005d2c389d" Dec 08 19:38:53 crc kubenswrapper[5120]: I1208 19:38:53.034900 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:38:53 crc kubenswrapper[5120]: I1208 19:38:53.036447 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:39:07 crc kubenswrapper[5120]: I1208 19:39:07.881895 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:39:07 crc kubenswrapper[5120]: I1208 19:39:07.884568 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:39:07 crc kubenswrapper[5120]: I1208 19:39:07.905234 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:39:07 crc kubenswrapper[5120]: I1208 19:39:07.906593 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:39:14 crc kubenswrapper[5120]: I1208 19:39:14.620385 5120 ???:1] "http: TLS handshake error from 192.168.126.11:53198: no serving certificate available for the kubelet" Dec 08 19:39:23 crc kubenswrapper[5120]: I1208 19:39:23.034799 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:39:23 crc kubenswrapper[5120]: I1208 19:39:23.035449 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.035711 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.037700 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.037898 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.038842 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5"} pod="openshift-machine-config-operator/machine-config-daemon-5j87q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.039118 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" containerID="cri-o://6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5" gracePeriod=600 Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.175127 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.637266 5120 generic.go:358] "Generic (PLEG): container finished" podID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerID="6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5" exitCode=0 Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.637373 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerDied","Data":"6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5"} Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.637912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5"} Dec 08 19:39:53 crc kubenswrapper[5120]: I1208 19:39:53.637938 5120 scope.go:117] "RemoveContainer" containerID="3f88d551f42032419a1d26a0feb9fd83716bdd3cc03c6791e6a2e5891078890a" Dec 08 19:40:10 crc kubenswrapper[5120]: I1208 19:40:10.819284 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4"] Dec 08 19:40:10 crc kubenswrapper[5120]: I1208 19:40:10.820120 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="kube-rbac-proxy" containerID="cri-o://2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" gracePeriod=30 Dec 08 19:40:10 crc kubenswrapper[5120]: I1208 19:40:10.820286 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="ovnkube-cluster-manager" containerID="cri-o://e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.043910 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ccb8r"] Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.044791 5120 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-controller" containerID="cri-o://11bd7c83e6371085562b076dd55467fec51fb89306fda486a06b01897670c376" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045263 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="sbdb" containerID="cri-o://5512c3b26249f7e15cc03298be9df7d8f611271bb4359ace6d7a5ed93d92dc1f" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045307 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="nbdb" containerID="cri-o://6cd57446164e31349496d7ca088a6f54f742042ed3efdf8d3ab90b04b1ef4f1d" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045335 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="northd" containerID="cri-o://c13470e1e2767a720f35b0fa139812edf32c47ac8f2c0ae7f6dfbee33556aa0c" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045365 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://106442a6e89d54037ddfad7a60188dbd555a3cf6df85d202834e370cd9267e06" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045409 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-node" containerID="cri-o://e3d8e6f6742a4bf08885c876aa877b67c8a3face2be5e3a3f11180d38a19e362" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.045451 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-acl-logging" containerID="cri-o://8542f930df7975a087d84e4ac2d642e89142a6d27ad5d9da6f56b0955409f950" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.076425 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovnkube-controller" containerID="cri-o://c14dd9da6dcf0894766923d72a8566c254a78dd9a49c4d238965665edf6bef2d" gracePeriod=30 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.655374 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.690764 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj"] Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691495 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="kube-rbac-proxy" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691519 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="kube-rbac-proxy" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691543 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="ovnkube-cluster-manager" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691552 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="ovnkube-cluster-manager" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691651 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="ovnkube-cluster-manager" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.691666 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerName="kube-rbac-proxy" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.751313 5120 generic.go:358] "Generic (PLEG): container finished" podID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerID="e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.751356 5120 generic.go:358] "Generic (PLEG): container finished" podID="39f34113-e3de-4681-aa3e-c78a89bec2bf" containerID="2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.756844 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t6dx4_0b722a01-9c2b-4e79-a301-c728aa5a90a1/kube-multus/0.log" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.756919 5120 generic.go:358] "Generic (PLEG): container finished" podID="0b722a01-9c2b-4e79-a301-c728aa5a90a1" containerID="8c51564e121d83416712137987b1424df67325c55496b115ae927d50368d542a" exitCode=2 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.768285 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-acl-logging/0.log" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769011 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-controller/0.log" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769621 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="c14dd9da6dcf0894766923d72a8566c254a78dd9a49c4d238965665edf6bef2d" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769655 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="5512c3b26249f7e15cc03298be9df7d8f611271bb4359ace6d7a5ed93d92dc1f" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769663 5120 
generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="6cd57446164e31349496d7ca088a6f54f742042ed3efdf8d3ab90b04b1ef4f1d" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769671 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="c13470e1e2767a720f35b0fa139812edf32c47ac8f2c0ae7f6dfbee33556aa0c" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769678 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="106442a6e89d54037ddfad7a60188dbd555a3cf6df85d202834e370cd9267e06" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769685 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="e3d8e6f6742a4bf08885c876aa877b67c8a3face2be5e3a3f11180d38a19e362" exitCode=0 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769693 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="8542f930df7975a087d84e4ac2d642e89142a6d27ad5d9da6f56b0955409f950" exitCode=143 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.769701 5120 generic.go:358] "Generic (PLEG): container finished" podID="1a06e739-3597-44df-894c-328bdbcf0af2" containerID="11bd7c83e6371085562b076dd55467fec51fb89306fda486a06b01897670c376" exitCode=143 Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.780996 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45hg5\" (UniqueName: \"kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5\") pod \"39f34113-e3de-4681-aa3e-c78a89bec2bf\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.781059 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert\") pod \"39f34113-e3de-4681-aa3e-c78a89bec2bf\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.781089 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config\") pod \"39f34113-e3de-4681-aa3e-c78a89bec2bf\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.781150 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides\") pod \"39f34113-e3de-4681-aa3e-c78a89bec2bf\" (UID: \"39f34113-e3de-4681-aa3e-c78a89bec2bf\") " Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.781998 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "39f34113-e3de-4681-aa3e-c78a89bec2bf" (UID: "39f34113-e3de-4681-aa3e-c78a89bec2bf"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.782442 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "39f34113-e3de-4681-aa3e-c78a89bec2bf" (UID: "39f34113-e3de-4681-aa3e-c78a89bec2bf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.786858 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "39f34113-e3de-4681-aa3e-c78a89bec2bf" (UID: "39f34113-e3de-4681-aa3e-c78a89bec2bf"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.786947 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5" (OuterVolumeSpecName: "kube-api-access-45hg5") pod "39f34113-e3de-4681-aa3e-c78a89bec2bf" (UID: "39f34113-e3de-4681-aa3e-c78a89bec2bf"). InnerVolumeSpecName "kube-api-access-45hg5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerDied","Data":"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843149 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerDied","Data":"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843159 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" event={"ID":"39f34113-e3de-4681-aa3e-c78a89bec2bf","Type":"ContainerDied","Data":"499e314a660c04ddb8f2c87d4f586671f0f3470ea4971cdb1a293eaac193d7ce"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843203 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t6dx4" event={"ID":"0b722a01-9c2b-4e79-a301-c728aa5a90a1","Type":"ContainerDied","Data":"8c51564e121d83416712137987b1424df67325c55496b115ae927d50368d542a"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"c14dd9da6dcf0894766923d72a8566c254a78dd9a49c4d238965665edf6bef2d"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843232 5120 scope.go:117] "RemoveContainer" containerID="e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843326 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843355 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"5512c3b26249f7e15cc03298be9df7d8f611271bb4359ace6d7a5ed93d92dc1f"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843370 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"6cd57446164e31349496d7ca088a6f54f742042ed3efdf8d3ab90b04b1ef4f1d"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843382 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"c13470e1e2767a720f35b0fa139812edf32c47ac8f2c0ae7f6dfbee33556aa0c"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843393 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"106442a6e89d54037ddfad7a60188dbd555a3cf6df85d202834e370cd9267e06"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843414 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"e3d8e6f6742a4bf08885c876aa877b67c8a3face2be5e3a3f11180d38a19e362"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843425 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"8542f930df7975a087d84e4ac2d642e89142a6d27ad5d9da6f56b0955409f950"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843436 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"11bd7c83e6371085562b076dd55467fec51fb89306fda486a06b01897670c376"} Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.843749 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.844076 5120 scope.go:117] "RemoveContainer" containerID="8c51564e121d83416712137987b1424df67325c55496b115ae927d50368d542a" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.872498 5120 scope.go:117] "RemoveContainer" containerID="2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.882153 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45hg5\" (UniqueName: \"kubernetes.io/projected/39f34113-e3de-4681-aa3e-c78a89bec2bf-kube-api-access-45hg5\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.882287 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.882302 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.882314 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39f34113-e3de-4681-aa3e-c78a89bec2bf-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.896206 5120 scope.go:117] "RemoveContainer" containerID="e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" Dec 08 19:40:11 crc kubenswrapper[5120]: E1208 19:40:11.896731 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7\": container with ID starting with e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7 not found: ID does not exist" containerID="e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.896766 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7"} err="failed to get container status \"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7\": rpc error: code = NotFound desc = could not find container \"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7\": container with ID starting with e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7 not found: ID does not exist" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.896790 5120 scope.go:117] "RemoveContainer" containerID="2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" Dec 08 19:40:11 crc kubenswrapper[5120]: E1208 19:40:11.897066 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d\": container with ID starting with 2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d not found: ID does not exist" containerID="2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.897098 5120 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d"} err="failed to get container status \"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d\": rpc error: code = NotFound desc = could not find container \"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d\": container with ID starting with 2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d not found: ID does not exist" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.897113 5120 scope.go:117] "RemoveContainer" containerID="e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.897390 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7"} err="failed to get container status \"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7\": rpc error: code = NotFound desc = could not find container \"e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7\": container with ID starting with e865d9839f57c8c2918ee3c2545ed4bfa419a3b53b6a33b7ae09a65cf678a6c7 not found: ID does not exist" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.897416 5120 scope.go:117] "RemoveContainer" containerID="2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.897663 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d"} err="failed to get container status \"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d\": rpc error: code = NotFound desc = could not find container \"2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d\": container with ID starting with 2a3245ec49cbe852cd7f52d1d713762c81b27220c373dcc7a27783326752774d not found: ID does not exist" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.898813 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4"] Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.907040 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ftpb4"] Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.983087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.983959 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfjx\" (UniqueName: \"kubernetes.io/projected/78c6c960-ff44-4aa6-b710-97753f1e20e9-kube-api-access-7bfjx\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.984211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.984249 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.991844 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-acl-logging/0.log" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.992443 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-controller/0.log" Dec 08 19:40:11 crc kubenswrapper[5120]: I1208 19:40:11.992912 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.046867 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jqzd"] Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047514 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047538 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047549 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-node" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047557 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-node" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047570 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="nbdb" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047577 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="nbdb" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047585 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047590 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047597 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="sbdb" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047604 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="sbdb" Dec 
08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047612 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="northd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047618 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="northd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047627 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kubecfg-setup" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047632 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kubecfg-setup" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047639 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-acl-logging" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047645 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-acl-logging" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047656 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovnkube-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047661 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovnkube-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047741 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="sbdb" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047748 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047759 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="northd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047767 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovn-acl-logging" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047773 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="nbdb" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047780 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047789 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="ovnkube-controller" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.047799 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" containerName="kube-rbac-proxy-node" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.053425 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084811 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084831 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084853 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084885 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084916 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084936 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.084983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085025 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash\") pod 
\"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085077 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085097 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085141 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085184 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085221 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7l9v\" (UniqueName: \"kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085272 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085312 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085335 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085355 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085394 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes\") pod \"1a06e739-3597-44df-894c-328bdbcf0af2\" (UID: \"1a06e739-3597-44df-894c-328bdbcf0af2\") " Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085500 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bfjx\" (UniqueName: \"kubernetes.io/projected/78c6c960-ff44-4aa6-b710-97753f1e20e9-kube-api-access-7bfjx\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.085713 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086225 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086290 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086872 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086929 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086937 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086935 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket" (OuterVolumeSpecName: "log-socket") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086963 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.086976 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log" (OuterVolumeSpecName: "node-log") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087001 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087029 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087097 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash" (OuterVolumeSpecName: "host-slash") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087254 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087262 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087289 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087314 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087348 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.087627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.088025 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.088460 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.103451 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78c6c960-ff44-4aa6-b710-97753f1e20e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.103684 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v" (OuterVolumeSpecName: "kube-api-access-b7l9v") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "kube-api-access-b7l9v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.103705 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.113025 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bfjx\" (UniqueName: \"kubernetes.io/projected/78c6c960-ff44-4aa6-b710-97753f1e20e9-kube-api-access-7bfjx\") pod \"ovnkube-control-plane-97c9b6c48-2sfdj\" (UID: \"78c6c960-ff44-4aa6-b710-97753f1e20e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.113951 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1a06e739-3597-44df-894c-328bdbcf0af2" (UID: "1a06e739-3597-44df-894c-328bdbcf0af2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.161085 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" Dec 08 19:40:12 crc kubenswrapper[5120]: W1208 19:40:12.178437 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78c6c960_ff44_4aa6_b710_97753f1e20e9.slice/crio-1a85ea25cb24ab74de48f9bb504518cd4b226a9037bd6157224a337c7b7506f7 WatchSource:0}: Error finding container 1a85ea25cb24ab74de48f9bb504518cd4b226a9037bd6157224a337c7b7506f7: Status 404 returned error can't find the container with id 1a85ea25cb24ab74de48f9bb504518cd4b226a9037bd6157224a337c7b7506f7 Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187352 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bm4\" (UniqueName: \"kubernetes.io/projected/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-kube-api-access-n2bm4\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187396 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-netd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187413 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-netns\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187430 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-bin\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187446 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-config\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187535 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-systemd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187616 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 
19:40:12.187651 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-systemd-units\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187690 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-kubelet\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187721 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovn-node-metrics-cert\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187750 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-log-socket\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187769 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-env-overrides\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187799 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187837 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-var-lib-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187880 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-node-log\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187916 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-ovn\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187937 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.187987 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-script-lib\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-etc-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188090 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-slash\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188185 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a06e739-3597-44df-894c-328bdbcf0af2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188200 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188213 5120 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188225 5120 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188236 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7l9v\" (UniqueName: \"kubernetes.io/projected/1a06e739-3597-44df-894c-328bdbcf0af2-kube-api-access-b7l9v\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188248 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188258 5120 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-ovn\") on node \"crc\" 
DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188269 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188279 5120 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188292 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188319 5120 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188332 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188344 5120 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188355 5120 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188367 5120 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188378 5120 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188389 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188400 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a06e739-3597-44df-894c-328bdbcf0af2-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188411 5120 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.188423 5120 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a06e739-3597-44df-894c-328bdbcf0af2-host-slash\") on node \"crc\" DevicePath \"\"" Dec 
08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289294 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289394 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-systemd-units\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289418 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-kubelet\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289441 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovn-node-metrics-cert\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289463 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-log-socket\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289483 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-env-overrides\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289533 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-var-lib-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" 
Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289561 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-node-log\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289588 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-ovn\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289609 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289643 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-script-lib\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289687 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-etc-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289710 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-slash\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289735 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2bm4\" (UniqueName: \"kubernetes.io/projected/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-kube-api-access-n2bm4\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289770 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-netd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-netns\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289817 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-bin\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289839 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-config\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289881 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-systemd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289966 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-systemd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.289999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-systemd-units\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290028 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-kubelet\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290844 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-netd\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290901 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-slash\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290949 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-etc-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290971 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-netns\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290993 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290954 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-host-cni-bin\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291028 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-node-log\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-log-socket\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.290999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-var-lib-openvswitch\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291111 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-run-ovn\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291324 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-script-lib\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-env-overrides\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.291696 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovnkube-config\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.297367 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-ovn-node-metrics-cert\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.309355 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2bm4\" (UniqueName: \"kubernetes.io/projected/4ac9436d-0fd6-4e4c-8dc5-b43621d21a40-kube-api-access-n2bm4\") pod \"ovnkube-node-8jqzd\" (UID: \"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.366011 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.781861 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t6dx4_0b722a01-9c2b-4e79-a301-c728aa5a90a1/kube-multus/0.log" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.781970 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t6dx4" event={"ID":"0b722a01-9c2b-4e79-a301-c728aa5a90a1","Type":"ContainerStarted","Data":"5555448d6f4c2e31947b26a686df2f65ec47dd55eb141af39a388d5ab67ad6ab"} Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.788332 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-acl-logging/0.log" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.788905 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ccb8r_1a06e739-3597-44df-894c-328bdbcf0af2/ovn-controller/0.log" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.789578 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" event={"ID":"1a06e739-3597-44df-894c-328bdbcf0af2","Type":"ContainerDied","Data":"a40383eaee87590ffd7736225b0e7f83f732c978107c5357d75573b319d9f93a"} Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.789668 5120 scope.go:117] "RemoveContainer" containerID="c14dd9da6dcf0894766923d72a8566c254a78dd9a49c4d238965665edf6bef2d" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.789849 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ccb8r" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.791607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"a4b8808c9f0fc1f1fdcf03e1b4c1f949acad7840fdd81b8b0a479a0a7c402d21"} Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.792957 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" event={"ID":"78c6c960-ff44-4aa6-b710-97753f1e20e9","Type":"ContainerStarted","Data":"01d568046b2d13257ab2430e0db33661f2723196f414a4456ef74f7206e445c9"} Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.792993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" event={"ID":"78c6c960-ff44-4aa6-b710-97753f1e20e9","Type":"ContainerStarted","Data":"1a85ea25cb24ab74de48f9bb504518cd4b226a9037bd6157224a337c7b7506f7"} Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.808371 5120 scope.go:117] "RemoveContainer" containerID="5512c3b26249f7e15cc03298be9df7d8f611271bb4359ace6d7a5ed93d92dc1f" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.827621 5120 scope.go:117] "RemoveContainer" containerID="6cd57446164e31349496d7ca088a6f54f742042ed3efdf8d3ab90b04b1ef4f1d" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.836308 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ccb8r"] Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.842833 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ccb8r"] Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.851895 5120 scope.go:117] "RemoveContainer" containerID="c13470e1e2767a720f35b0fa139812edf32c47ac8f2c0ae7f6dfbee33556aa0c" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.866622 5120 scope.go:117] "RemoveContainer" containerID="106442a6e89d54037ddfad7a60188dbd555a3cf6df85d202834e370cd9267e06" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.882080 5120 scope.go:117] "RemoveContainer" containerID="e3d8e6f6742a4bf08885c876aa877b67c8a3face2be5e3a3f11180d38a19e362" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.903576 5120 scope.go:117] "RemoveContainer" containerID="8542f930df7975a087d84e4ac2d642e89142a6d27ad5d9da6f56b0955409f950" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.922753 5120 scope.go:117] "RemoveContainer" containerID="11bd7c83e6371085562b076dd55467fec51fb89306fda486a06b01897670c376" Dec 08 19:40:12 crc kubenswrapper[5120]: I1208 19:40:12.940161 5120 scope.go:117] "RemoveContainer" containerID="bfde00eb4e4fc404eccbaae8d1c26ac0555c85950181aeba3e83f5a1805c0ab0" Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.667822 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a06e739-3597-44df-894c-328bdbcf0af2" path="/var/lib/kubelet/pods/1a06e739-3597-44df-894c-328bdbcf0af2/volumes" Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.668973 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f34113-e3de-4681-aa3e-c78a89bec2bf" path="/var/lib/kubelet/pods/39f34113-e3de-4681-aa3e-c78a89bec2bf/volumes" Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.802783 5120 generic.go:358] "Generic (PLEG): container finished" podID="4ac9436d-0fd6-4e4c-8dc5-b43621d21a40" 
containerID="75758e273ff19d142854c780bf48afb9226a7a7ccda75e51c2abebb1f3e04c9b" exitCode=0 Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.802938 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerDied","Data":"75758e273ff19d142854c780bf48afb9226a7a7ccda75e51c2abebb1f3e04c9b"} Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.806157 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" event={"ID":"78c6c960-ff44-4aa6-b710-97753f1e20e9","Type":"ContainerStarted","Data":"c69202432ea99678d2d924f732fbb61e499e06f5c14bc5052b48e1a0b0a4be7e"} Dec 08 19:40:13 crc kubenswrapper[5120]: I1208 19:40:13.870085 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2sfdj" podStartSLOduration=3.870062614 podStartE2EDuration="3.870062614s" podCreationTimestamp="2025-12-08 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:40:13.866648107 +0000 UTC m=+666.538754776" watchObservedRunningTime="2025-12-08 19:40:13.870062614 +0000 UTC m=+666.542169263" Dec 08 19:40:14 crc kubenswrapper[5120]: I1208 19:40:14.819993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"8f7a84259aaa80462483da6c9f9512cfb244f5bc04764f835492c42080bff4ab"} Dec 08 19:40:14 crc kubenswrapper[5120]: I1208 19:40:14.820318 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"d2b45a88db8c5cc6c6528ad55adf98467927c41487fa85c553037fd84c0ee8e7"} Dec 08 19:40:14 crc kubenswrapper[5120]: I1208 19:40:14.820335 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"a0e793f437f60c3698ea9112cc88de6d134005790afe3c7583304753afd8fd8c"} Dec 08 19:40:14 crc kubenswrapper[5120]: I1208 19:40:14.820346 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"6880c53446c9a60243059fa6e502af293be821788f22f9617d0fd6e019020801"} Dec 08 19:40:15 crc kubenswrapper[5120]: I1208 19:40:15.826627 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"e9f54a6a9b2740353dd95e77730942765cd104cfca862b3493c33d869b87a434"} Dec 08 19:40:15 crc kubenswrapper[5120]: I1208 19:40:15.826928 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"3a2bf52143dada1f9b5eb125fffc339f175a126ec6ab08f3aff18b2a01a7b9e3"} Dec 08 19:40:17 crc kubenswrapper[5120]: I1208 19:40:17.842217 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" 
event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"12576c710b8a5fa2d306d6034661a04ccce13df6b5ad83245d811fcb11a88ca7"} Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.866591 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" event={"ID":"4ac9436d-0fd6-4e4c-8dc5-b43621d21a40","Type":"ContainerStarted","Data":"50ee903c1a7e8cdb65bf23d0c0b5efa53b29e59f3a12dfcab34f167eeecec252"} Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.867122 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.867142 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.867153 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.902481 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" podStartSLOduration=9.902456539 podStartE2EDuration="9.902456539s" podCreationTimestamp="2025-12-08 19:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:40:21.89740006 +0000 UTC m=+674.569506719" watchObservedRunningTime="2025-12-08 19:40:21.902456539 +0000 UTC m=+674.574563218" Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.907816 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:21 crc kubenswrapper[5120]: I1208 19:40:21.910700 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:40:53 crc kubenswrapper[5120]: I1208 19:40:53.912697 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jqzd" Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.816884 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.832535 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.832722 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.941294 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5n65\" (UniqueName: \"kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.941363 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:15 crc kubenswrapper[5120]: I1208 19:41:15.941391 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.042723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5n65\" (UniqueName: \"kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.042809 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.042843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.043446 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.043526 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities\") pod \"redhat-operators-xk2z2\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.063241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5n65\" (UniqueName: \"kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65\") pod \"redhat-operators-xk2z2\" (UID: 
\"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.159661 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:16 crc kubenswrapper[5120]: I1208 19:41:16.398301 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:17 crc kubenswrapper[5120]: I1208 19:41:17.177850 5120 generic.go:358] "Generic (PLEG): container finished" podID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerID="5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e" exitCode=0 Dec 08 19:41:17 crc kubenswrapper[5120]: I1208 19:41:17.177955 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerDied","Data":"5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e"} Dec 08 19:41:17 crc kubenswrapper[5120]: I1208 19:41:17.179397 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerStarted","Data":"1c43c2211aff67d87f75ad384dc773050c2295bdc97166201ac0d1519b495b2b"} Dec 08 19:41:18 crc kubenswrapper[5120]: I1208 19:41:18.187289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerStarted","Data":"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9"} Dec 08 19:41:19 crc kubenswrapper[5120]: I1208 19:41:19.194687 5120 generic.go:358] "Generic (PLEG): container finished" podID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerID="b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9" exitCode=0 Dec 08 19:41:19 crc kubenswrapper[5120]: I1208 19:41:19.194837 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerDied","Data":"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9"} Dec 08 19:41:20 crc kubenswrapper[5120]: I1208 19:41:20.203415 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerStarted","Data":"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26"} Dec 08 19:41:20 crc kubenswrapper[5120]: I1208 19:41:20.218847 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xk2z2" podStartSLOduration=4.6415891479999996 podStartE2EDuration="5.218834828s" podCreationTimestamp="2025-12-08 19:41:15 +0000 UTC" firstStartedPulling="2025-12-08 19:41:17.178988656 +0000 UTC m=+729.851095305" lastFinishedPulling="2025-12-08 19:41:17.756234336 +0000 UTC m=+730.428340985" observedRunningTime="2025-12-08 19:41:20.217784715 +0000 UTC m=+732.889891374" watchObservedRunningTime="2025-12-08 19:41:20.218834828 +0000 UTC m=+732.890941477" Dec 08 19:41:24 crc kubenswrapper[5120]: I1208 19:41:24.372772 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:41:24 crc kubenswrapper[5120]: I1208 19:41:24.373522 5120 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-lq4gq" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="registry-server" containerID="cri-o://c42bf799f7aee589ed101c26c3174fee94e45f51aa39cc6a3920b57bf4ee2c51" gracePeriod=30 Dec 08 19:41:25 crc kubenswrapper[5120]: I1208 19:41:25.530292 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nmdp7"] Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.138715 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.144493 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.144693 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nmdp7"] Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.144712 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.144808 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.194669 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.251820 5120 generic.go:358] "Generic (PLEG): container finished" podID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerID="c42bf799f7aee589ed101c26c3174fee94e45f51aa39cc6a3920b57bf4ee2c51" exitCode=0 Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.252723 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerDied","Data":"c42bf799f7aee589ed101c26c3174fee94e45f51aa39cc6a3920b57bf4ee2c51"} Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292428 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292473 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292493 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh84t\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-kube-api-access-mh84t\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292510 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-trusted-ca\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292532 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-certificates\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292566 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292600 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-tls\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.292623 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.325375 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.393687 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-certificates\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.393773 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-tls\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.393796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.394015 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.394034 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.394054 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mh84t\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-kube-api-access-mh84t\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.394072 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-trusted-ca\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.394751 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.395488 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-trusted-ca\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.396531 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-certificates\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.401600 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 
19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.402230 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-registry-tls\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.414480 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz"] Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.419049 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh84t\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-kube-api-access-mh84t\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.438205 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aee9a8a4-d0d0-40b9-80e6-538349bf2c81-bound-sa-token\") pod \"image-registry-5d9d95bf5b-nmdp7\" (UID: \"aee9a8a4-d0d0-40b9-80e6-538349bf2c81\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.476205 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.496371 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.595832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6ndp\" (UniqueName: \"kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp\") pod \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.596129 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content\") pod \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.603919 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp" (OuterVolumeSpecName: "kube-api-access-l6ndp") pod "70af827c-5e89-4676-aa30-d13e0b7a4ca5" (UID: "70af827c-5e89-4676-aa30-d13e0b7a4ca5"). InnerVolumeSpecName "kube-api-access-l6ndp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.605121 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70af827c-5e89-4676-aa30-d13e0b7a4ca5" (UID: "70af827c-5e89-4676-aa30-d13e0b7a4ca5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.607421 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities\") pod \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\" (UID: \"70af827c-5e89-4676-aa30-d13e0b7a4ca5\") " Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.607854 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l6ndp\" (UniqueName: \"kubernetes.io/projected/70af827c-5e89-4676-aa30-d13e0b7a4ca5-kube-api-access-l6ndp\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.607870 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.608812 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities" (OuterVolumeSpecName: "utilities") pod "70af827c-5e89-4676-aa30-d13e0b7a4ca5" (UID: "70af827c-5e89-4676-aa30-d13e0b7a4ca5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.632493 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.633329 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz"] Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.634455 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.660908 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-nmdp7"] Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.709368 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.709440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.709498 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjbv\" (UniqueName: \"kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.709636 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70af827c-5e89-4676-aa30-d13e0b7a4ca5-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.810521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.810587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.810618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kjjbv\" (UniqueName: \"kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.810980 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.811209 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.829780 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjjbv\" (UniqueName: \"kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:28 crc kubenswrapper[5120]: I1208 19:41:28.954401 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.137072 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz"] Dec 08 19:41:29 crc kubenswrapper[5120]: W1208 19:41:29.142195 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83329e78_ec43_4185_ac94_c14b4718c077.slice/crio-76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa WatchSource:0}: Error finding container 76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa: Status 404 returned error can't find the container with id 76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.259586 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" event={"ID":"aee9a8a4-d0d0-40b9-80e6-538349bf2c81","Type":"ContainerStarted","Data":"5742ab82ac30fa2f80152ecad84cc7bf05ae0dde98667c8af2aa8f8b713d7a2c"} Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.259658 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" event={"ID":"aee9a8a4-d0d0-40b9-80e6-538349bf2c81","Type":"ContainerStarted","Data":"143802e3a09a5660cc84b364da5b29274f112ef2f8fab02c017743cd65eb4904"} Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.259760 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.264643 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4gq" event={"ID":"70af827c-5e89-4676-aa30-d13e0b7a4ca5","Type":"ContainerDied","Data":"77e55d1d9c81db35f737fa8039e9f137785992817fa5678a32c9c3f9c1376722"} Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.264736 5120 scope.go:117] "RemoveContainer" containerID="c42bf799f7aee589ed101c26c3174fee94e45f51aa39cc6a3920b57bf4ee2c51" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.264950 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4gq" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.271344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" event={"ID":"83329e78-ec43-4185-ac94-c14b4718c077","Type":"ContainerStarted","Data":"76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa"} Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.283102 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" podStartSLOduration=4.283078078 podStartE2EDuration="4.283078078s" podCreationTimestamp="2025-12-08 19:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:41:29.276752019 +0000 UTC m=+741.948858688" watchObservedRunningTime="2025-12-08 19:41:29.283078078 +0000 UTC m=+741.955184727" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.284472 5120 scope.go:117] "RemoveContainer" containerID="2f94a15b3cddd26e7b5e070684ddb21e877259396ef3f84929a90c3dcfdc9053" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.304475 5120 scope.go:117] "RemoveContainer" containerID="e77034c4cfbdaddbb0f7d243f9159920241cef427ad3c6eed93e817fb72ee379" Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.306459 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.310731 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4gq"] Dec 08 19:41:29 crc kubenswrapper[5120]: I1208 19:41:29.671516 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" path="/var/lib/kubelet/pods/70af827c-5e89-4676-aa30-d13e0b7a4ca5/volumes" Dec 08 19:41:30 crc kubenswrapper[5120]: I1208 19:41:30.279589 5120 generic.go:358] "Generic (PLEG): container finished" podID="83329e78-ec43-4185-ac94-c14b4718c077" containerID="53c2981cb97597b630d3f4985d4c89dc5d7c1f0201ba93f0b04ca289dc5cdb84" exitCode=0 Dec 08 19:41:30 crc kubenswrapper[5120]: I1208 19:41:30.280454 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" event={"ID":"83329e78-ec43-4185-ac94-c14b4718c077","Type":"ContainerDied","Data":"53c2981cb97597b630d3f4985d4c89dc5d7c1f0201ba93f0b04ca289dc5cdb84"} Dec 08 19:41:31 crc kubenswrapper[5120]: I1208 19:41:31.137433 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:31 crc kubenswrapper[5120]: I1208 19:41:31.137764 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xk2z2" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="registry-server" containerID="cri-o://01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26" gracePeriod=2 Dec 08 19:41:31 crc kubenswrapper[5120]: I1208 19:41:31.997762 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.061416 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities\") pod \"3bf3024f-4e85-4ce7-9c87-2216a556903c\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.061491 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5n65\" (UniqueName: \"kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65\") pod \"3bf3024f-4e85-4ce7-9c87-2216a556903c\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.061541 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content\") pod \"3bf3024f-4e85-4ce7-9c87-2216a556903c\" (UID: \"3bf3024f-4e85-4ce7-9c87-2216a556903c\") " Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.062617 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities" (OuterVolumeSpecName: "utilities") pod "3bf3024f-4e85-4ce7-9c87-2216a556903c" (UID: "3bf3024f-4e85-4ce7-9c87-2216a556903c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.069721 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65" (OuterVolumeSpecName: "kube-api-access-t5n65") pod "3bf3024f-4e85-4ce7-9c87-2216a556903c" (UID: "3bf3024f-4e85-4ce7-9c87-2216a556903c"). InnerVolumeSpecName "kube-api-access-t5n65". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.163469 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.163512 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5n65\" (UniqueName: \"kubernetes.io/projected/3bf3024f-4e85-4ce7-9c87-2216a556903c-kube-api-access-t5n65\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.176640 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf3024f-4e85-4ce7-9c87-2216a556903c" (UID: "3bf3024f-4e85-4ce7-9c87-2216a556903c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.264306 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3024f-4e85-4ce7-9c87-2216a556903c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.292559 5120 generic.go:358] "Generic (PLEG): container finished" podID="83329e78-ec43-4185-ac94-c14b4718c077" containerID="d88d5becd09f4b0bd5fe60c3e6d5782510485615b440c87c967108163ce8575e" exitCode=0 Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.292710 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" event={"ID":"83329e78-ec43-4185-ac94-c14b4718c077","Type":"ContainerDied","Data":"d88d5becd09f4b0bd5fe60c3e6d5782510485615b440c87c967108163ce8575e"} Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.295410 5120 generic.go:358] "Generic (PLEG): container finished" podID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerID="01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26" exitCode=0 Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.295505 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xk2z2" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.295502 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerDied","Data":"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26"} Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.295942 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xk2z2" event={"ID":"3bf3024f-4e85-4ce7-9c87-2216a556903c","Type":"ContainerDied","Data":"1c43c2211aff67d87f75ad384dc773050c2295bdc97166201ac0d1519b495b2b"} Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.295987 5120 scope.go:117] "RemoveContainer" containerID="01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.327393 5120 scope.go:117] "RemoveContainer" containerID="b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.348058 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.356292 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xk2z2"] Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.364719 5120 scope.go:117] "RemoveContainer" containerID="5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.382986 5120 scope.go:117] "RemoveContainer" containerID="01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26" Dec 08 19:41:32 crc kubenswrapper[5120]: E1208 19:41:32.383466 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26\": container with ID starting with 01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26 not found: ID does not exist" containerID="01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26" Dec 
08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.383526 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26"} err="failed to get container status \"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26\": rpc error: code = NotFound desc = could not find container \"01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26\": container with ID starting with 01998bc37a7a335f30b8affec78673c311727d8887ed89adbb3fba655a1f8d26 not found: ID does not exist" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.383563 5120 scope.go:117] "RemoveContainer" containerID="b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9" Dec 08 19:41:32 crc kubenswrapper[5120]: E1208 19:41:32.383871 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9\": container with ID starting with b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9 not found: ID does not exist" containerID="b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.383902 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9"} err="failed to get container status \"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9\": rpc error: code = NotFound desc = could not find container \"b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9\": container with ID starting with b62e2070d6e6186268ba3dcf33b59014f3deac895e6c8470270c2fbcb4aaa1f9 not found: ID does not exist" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.383922 5120 scope.go:117] "RemoveContainer" containerID="5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e" Dec 08 19:41:32 crc kubenswrapper[5120]: E1208 19:41:32.384286 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e\": container with ID starting with 5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e not found: ID does not exist" containerID="5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e" Dec 08 19:41:32 crc kubenswrapper[5120]: I1208 19:41:32.384324 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e"} err="failed to get container status \"5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e\": rpc error: code = NotFound desc = could not find container \"5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e\": container with ID starting with 5e7efa9b58aa340e5f98818582662e33e9552a47cee5bdd4ab44b4f73849b73e not found: ID does not exist" Dec 08 19:41:33 crc kubenswrapper[5120]: I1208 19:41:33.303660 5120 generic.go:358] "Generic (PLEG): container finished" podID="83329e78-ec43-4185-ac94-c14b4718c077" containerID="1ab3326c026d1d9f539fa1eb82f5da773320b04c4e8702ea0ec61958567d9d44" exitCode=0 Dec 08 19:41:33 crc kubenswrapper[5120]: I1208 19:41:33.303750 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" 
event={"ID":"83329e78-ec43-4185-ac94-c14b4718c077","Type":"ContainerDied","Data":"1ab3326c026d1d9f539fa1eb82f5da773320b04c4e8702ea0ec61958567d9d44"} Dec 08 19:41:33 crc kubenswrapper[5120]: I1208 19:41:33.667758 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" path="/var/lib/kubelet/pods/3bf3024f-4e85-4ce7-9c87-2216a556903c/volumes" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.530352 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.594510 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util\") pod \"83329e78-ec43-4185-ac94-c14b4718c077\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.594882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle\") pod \"83329e78-ec43-4185-ac94-c14b4718c077\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.594992 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjjbv\" (UniqueName: \"kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv\") pod \"83329e78-ec43-4185-ac94-c14b4718c077\" (UID: \"83329e78-ec43-4185-ac94-c14b4718c077\") " Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.598045 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle" (OuterVolumeSpecName: "bundle") pod "83329e78-ec43-4185-ac94-c14b4718c077" (UID: "83329e78-ec43-4185-ac94-c14b4718c077"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.605440 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv" (OuterVolumeSpecName: "kube-api-access-kjjbv") pod "83329e78-ec43-4185-ac94-c14b4718c077" (UID: "83329e78-ec43-4185-ac94-c14b4718c077"). InnerVolumeSpecName "kube-api-access-kjjbv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.697504 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.697563 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kjjbv\" (UniqueName: \"kubernetes.io/projected/83329e78-ec43-4185-ac94-c14b4718c077-kube-api-access-kjjbv\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.854149 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util" (OuterVolumeSpecName: "util") pod "83329e78-ec43-4185-ac94-c14b4718c077" (UID: "83329e78-ec43-4185-ac94-c14b4718c077"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:34 crc kubenswrapper[5120]: I1208 19:41:34.900266 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83329e78-ec43-4185-ac94-c14b4718c077-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:35 crc kubenswrapper[5120]: I1208 19:41:35.316901 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" event={"ID":"83329e78-ec43-4185-ac94-c14b4718c077","Type":"ContainerDied","Data":"76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa"} Dec 08 19:41:35 crc kubenswrapper[5120]: I1208 19:41:35.316940 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a78bf87f135d436ab29e6a08edf82035e1d5923f74b81bc13c46b27e861bfa" Dec 08 19:41:35 crc kubenswrapper[5120]: I1208 19:41:35.316953 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gt2fz" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.809453 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b"] Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810776 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="pull" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810804 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="pull" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810822 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="extract-utilities" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810834 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="extract-utilities" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810855 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="registry-server" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810865 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="registry-server" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810879 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="extract" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810888 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="extract" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810923 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="extract-utilities" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810934 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="extract-utilities" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810952 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83329e78-ec43-4185-ac94-c14b4718c077" 
containerName="util" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810963 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="util" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810980 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="extract-content" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.810989 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="extract-content" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811001 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="registry-server" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811010 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="registry-server" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811027 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="extract-content" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811038 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="extract-content" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811185 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3bf3024f-4e85-4ce7-9c87-2216a556903c" containerName="registry-server" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811209 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="83329e78-ec43-4185-ac94-c14b4718c077" containerName="extract" Dec 08 19:41:37 crc kubenswrapper[5120]: I1208 19:41:37.811228 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="70af827c-5e89-4676-aa30-d13e0b7a4ca5" containerName="registry-server" Dec 08 19:41:38 crc kubenswrapper[5120]: I1208 19:41:38.969298 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b"] Dec 08 19:41:38 crc kubenswrapper[5120]: I1208 19:41:38.969453 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll"] Dec 08 19:41:38 crc kubenswrapper[5120]: I1208 19:41:38.969527 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:38 crc kubenswrapper[5120]: I1208 19:41:38.972158 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.013478 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll"] Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.013536 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064132 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064271 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064369 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7fmf\" (UniqueName: \"kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064422 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.064496 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6fg\" (UniqueName: \"kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.166345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.166923 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.166972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167043 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7fmf\" (UniqueName: \"kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167100 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167188 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6fg\" (UniqueName: \"kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167276 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167491 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.167730 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.168019 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.198740 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6fg\" (UniqueName: \"kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.199083 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7fmf\" (UniqueName: \"kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.301822 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.343090 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.522892 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b"] Dec 08 19:41:39 crc kubenswrapper[5120]: I1208 19:41:39.617440 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll"] Dec 08 19:41:39 crc kubenswrapper[5120]: W1208 19:41:39.622294 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f00c842_ebc9_496d_8e6e_643c63166626.slice/crio-ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21 WatchSource:0}: Error finding container ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21: Status 404 returned error can't find the container with id ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21 Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.357402 5120 generic.go:358] "Generic (PLEG): container finished" podID="6f00c842-ebc9-496d-8e6e-643c63166626" containerID="0052dfae380b5e56f53ca22cbb0475b7b8bc973c5cff856c404a445c00829f7e" exitCode=0 Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.357507 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerDied","Data":"0052dfae380b5e56f53ca22cbb0475b7b8bc973c5cff856c404a445c00829f7e"} Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.357535 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerStarted","Data":"ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21"} Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.360314 5120 generic.go:358] "Generic (PLEG): container finished" podID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerID="11df0918a0a7d050fac7f8be0a76e9e7df0fccfb02ec7c42c4184a69493866d8" exitCode=0 Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.360371 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" event={"ID":"b41a70db-cf8d-429d-bd35-f11d1774752b","Type":"ContainerDied","Data":"11df0918a0a7d050fac7f8be0a76e9e7df0fccfb02ec7c42c4184a69493866d8"} Dec 08 19:41:40 crc kubenswrapper[5120]: I1208 19:41:40.360392 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" event={"ID":"b41a70db-cf8d-429d-bd35-f11d1774752b","Type":"ContainerStarted","Data":"657146144dd6921c7158da13df31c464efdc1a95cdf4acff04fdd3fb63afef50"} Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.190114 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.223232 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.223480 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.270263 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdsr\" (UniqueName: \"kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.270392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.270421 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.371379 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smdsr\" (UniqueName: \"kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.371662 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.371683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.372120 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.372391 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities\") pod \"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.449153 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdsr\" (UniqueName: \"kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr\") pod 
\"certified-operators-swmfh\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:42 crc kubenswrapper[5120]: I1208 19:41:42.582876 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.402412 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b"] Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.623922 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b"] Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.623966 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.624122 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.698048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64pg2\" (UniqueName: \"kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.698189 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.698258 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: W1208 19:41:43.716033 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbb84f6_0831_4c68_9cfa_1fb9316bda93.slice/crio-d0c41802a6149b9b0d75a5948bb0fb9e97a3fee504285663e8b8714d1d512265 WatchSource:0}: Error finding container d0c41802a6149b9b0d75a5948bb0fb9e97a3fee504285663e8b8714d1d512265: Status 404 returned error can't find the container with id d0c41802a6149b9b0d75a5948bb0fb9e97a3fee504285663e8b8714d1d512265 Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.799617 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.799938 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-64pg2\" (UniqueName: \"kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.799992 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.800301 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.800452 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.825612 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-64pg2\" (UniqueName: \"kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:43 crc kubenswrapper[5120]: I1208 19:41:43.941561 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:41:44 crc kubenswrapper[5120]: I1208 19:41:44.380918 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerStarted","Data":"d0c41802a6149b9b0d75a5948bb0fb9e97a3fee504285663e8b8714d1d512265"} Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.070417 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b"] Dec 08 19:41:47 crc kubenswrapper[5120]: W1208 19:41:47.073922 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod289827c9_3e57_496d_92f8_07197e56ff6e.slice/crio-3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383 WatchSource:0}: Error finding container 3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383: Status 404 returned error can't find the container with id 3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383 Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.399483 5120 generic.go:358] "Generic (PLEG): container finished" podID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerID="86bc50a9005d1bb5a88309c0d08172c4ec86b1fe5d7f05a3b15257b112f9d02a" exitCode=0 Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.399575 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" event={"ID":"b41a70db-cf8d-429d-bd35-f11d1774752b","Type":"ContainerDied","Data":"86bc50a9005d1bb5a88309c0d08172c4ec86b1fe5d7f05a3b15257b112f9d02a"} Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.402026 5120 generic.go:358] "Generic (PLEG): container finished" podID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerID="94d9cb1c82c2f6934ed49f071cbf5319a650f6130e8b7dc5655fbf10815e1d13" exitCode=0 Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.402220 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerDied","Data":"94d9cb1c82c2f6934ed49f071cbf5319a650f6130e8b7dc5655fbf10815e1d13"} Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.408092 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerStarted","Data":"41891b28298ceac3d481bd76c07a51ece863871d76deffc234b4b2b5e98bee96"} Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.414442 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" event={"ID":"289827c9-3e57-496d-92f8-07197e56ff6e","Type":"ContainerStarted","Data":"3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383"} Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.423639 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-fqm2w"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.427111 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.432736 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-4n68r\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.433464 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.433633 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.437067 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-fqm2w"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.558290 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.617467 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bb7\" (UniqueName: \"kubernetes.io/projected/c729374c-925c-4808-ab08-f6f7c1ef0f8a-kube-api-access-q8bb7\") pod \"obo-prometheus-operator-86648f486b-fqm2w\" (UID: \"c729374c-925c-4808-ab08-f6f7c1ef0f8a\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.719180 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bb7\" (UniqueName: \"kubernetes.io/projected/c729374c-925c-4808-ab08-f6f7c1ef0f8a-kube-api-access-q8bb7\") pod \"obo-prometheus-operator-86648f486b-fqm2w\" (UID: \"c729374c-925c-4808-ab08-f6f7c1ef0f8a\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.719802 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.719950 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.722629 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-sppjn\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.723508 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.723977 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.724016 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.724282 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.738657 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2zxxb"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.743768 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2zxxb"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.744063 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.746262 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.746772 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-p8lzr\"" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.749133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bb7\" (UniqueName: \"kubernetes.io/projected/c729374c-925c-4808-ab08-f6f7c1ef0f8a-kube-api-access-q8bb7\") pod \"obo-prometheus-operator-86648f486b-fqm2w\" (UID: \"c729374c-925c-4808-ab08-f6f7c1ef0f8a\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.751728 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.784376 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tzkn2"] Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.820219 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.820266 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.820288 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.820440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921845 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921866 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921904 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzcq\" (UniqueName: \"kubernetes.io/projected/d080a6af-d35c-46d0-9580-438838771692-kube-api-access-zrzcq\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d080a6af-d35c-46d0-9580-438838771692-observability-operator-tls\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.921997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.929588 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.931944 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.932629 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d42d0baf-0dee-4327-835b-d61c28032b62-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-nw2wn\" (UID: \"d42d0baf-0dee-4327-835b-d61c28032b62\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:47 crc kubenswrapper[5120]: I1208 19:41:47.943713 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2764c5fb-6e2e-4b86-965e-b43f7c6e510a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d54446746-xkvqs\" (UID: \"2764c5fb-6e2e-4b86-965e-b43f7c6e510a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.017022 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tzkn2"] Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.017206 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.023787 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-bpvbl\"" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.023814 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d080a6af-d35c-46d0-9580-438838771692-observability-operator-tls\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.023902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzcq\" (UniqueName: \"kubernetes.io/projected/d080a6af-d35c-46d0-9580-438838771692-kube-api-access-zrzcq\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.028059 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d080a6af-d35c-46d0-9580-438838771692-observability-operator-tls\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.051504 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.072358 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzcq\" (UniqueName: \"kubernetes.io/projected/d080a6af-d35c-46d0-9580-438838771692-kube-api-access-zrzcq\") pod \"observability-operator-78c97476f4-2zxxb\" (UID: \"d080a6af-d35c-46d0-9580-438838771692\") " pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.103699 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.118516 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.121828 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-fqm2w"] Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.125087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmm7j\" (UniqueName: \"kubernetes.io/projected/b17e2bc2-7fa8-48d5-be84-6033bca55151-kube-api-access-cmm7j\") pod \"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.125255 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b17e2bc2-7fa8-48d5-be84-6033bca55151-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.226441 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b17e2bc2-7fa8-48d5-be84-6033bca55151-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.226522 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmm7j\" (UniqueName: \"kubernetes.io/projected/b17e2bc2-7fa8-48d5-be84-6033bca55151-kube-api-access-cmm7j\") pod \"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.227637 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b17e2bc2-7fa8-48d5-be84-6033bca55151-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.249536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmm7j\" (UniqueName: \"kubernetes.io/projected/b17e2bc2-7fa8-48d5-be84-6033bca55151-kube-api-access-cmm7j\") pod 
\"perses-operator-68bdb49cbf-tzkn2\" (UID: \"b17e2bc2-7fa8-48d5-be84-6033bca55151\") " pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.339280 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.392518 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs"] Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.433430 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" event={"ID":"2764c5fb-6e2e-4b86-965e-b43f7c6e510a","Type":"ContainerStarted","Data":"7f13f8197b68da93640129b485ff40f40ad766686e8194998261e8948cfdb42e"} Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.436525 5120 generic.go:358] "Generic (PLEG): container finished" podID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerID="b2d82bd1bcceb5447fb941a18149df3175fb61d9366d71a119c52300cfc11358" exitCode=0 Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.436686 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" event={"ID":"b41a70db-cf8d-429d-bd35-f11d1774752b","Type":"ContainerDied","Data":"b2d82bd1bcceb5447fb941a18149df3175fb61d9366d71a119c52300cfc11358"} Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.456887 5120 generic.go:358] "Generic (PLEG): container finished" podID="6f00c842-ebc9-496d-8e6e-643c63166626" containerID="41891b28298ceac3d481bd76c07a51ece863871d76deffc234b4b2b5e98bee96" exitCode=0 Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.456976 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerDied","Data":"41891b28298ceac3d481bd76c07a51ece863871d76deffc234b4b2b5e98bee96"} Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.463140 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" event={"ID":"c729374c-925c-4808-ab08-f6f7c1ef0f8a","Type":"ContainerStarted","Data":"874bc493f4ee5766524639cea42be31203af2343bc7a2baf687f893e851e3b76"} Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.466993 5120 generic.go:358] "Generic (PLEG): container finished" podID="289827c9-3e57-496d-92f8-07197e56ff6e" containerID="1bbc42a90ac87fbd7e36a363ede536259531eba763fd3d971a09dda77e046c29" exitCode=0 Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.467063 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" event={"ID":"289827c9-3e57-496d-92f8-07197e56ff6e","Type":"ContainerDied","Data":"1bbc42a90ac87fbd7e36a363ede536259531eba763fd3d971a09dda77e046c29"} Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.492187 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn"] Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.576111 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tzkn2"] Dec 08 19:41:48 crc kubenswrapper[5120]: W1208 19:41:48.587048 5120 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17e2bc2_7fa8_48d5_be84_6033bca55151.slice/crio-0b81aff48458acf24a7d86accc586a23c210740e898707771d2e7b0c2c39f956 WatchSource:0}: Error finding container 0b81aff48458acf24a7d86accc586a23c210740e898707771d2e7b0c2c39f956: Status 404 returned error can't find the container with id 0b81aff48458acf24a7d86accc586a23c210740e898707771d2e7b0c2c39f956 Dec 08 19:41:48 crc kubenswrapper[5120]: I1208 19:41:48.652896 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-2zxxb"] Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.483654 5120 generic.go:358] "Generic (PLEG): container finished" podID="6f00c842-ebc9-496d-8e6e-643c63166626" containerID="ac7258db268122681befc08f0e1a8738ed3e4a87b46ffe8274cfde12d548418a" exitCode=0 Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.483755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerDied","Data":"ac7258db268122681befc08f0e1a8738ed3e4a87b46ffe8274cfde12d548418a"} Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.491574 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" event={"ID":"b17e2bc2-7fa8-48d5-be84-6033bca55151","Type":"ContainerStarted","Data":"0b81aff48458acf24a7d86accc586a23c210740e898707771d2e7b0c2c39f956"} Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.498031 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" event={"ID":"d42d0baf-0dee-4327-835b-d61c28032b62","Type":"ContainerStarted","Data":"2b85d73aea9df5a9f51f8764c404f5dea99834066a079382b251de49f14028cd"} Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.501116 5120 generic.go:358] "Generic (PLEG): container finished" podID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerID="eceeff7cfcb613bafdc00c10cb81fe25e5d0677c1a6f31e773584987634a7e76" exitCode=0 Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.501197 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerDied","Data":"eceeff7cfcb613bafdc00c10cb81fe25e5d0677c1a6f31e773584987634a7e76"} Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.504224 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" event={"ID":"d080a6af-d35c-46d0-9580-438838771692","Type":"ContainerStarted","Data":"94f65d1ffd2cb80dc6bc5b64ec0f76a2829e74a18e43388b93b49e8d2057636e"} Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.837621 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.968445 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7fmf\" (UniqueName: \"kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf\") pod \"b41a70db-cf8d-429d-bd35-f11d1774752b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.968572 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle\") pod \"b41a70db-cf8d-429d-bd35-f11d1774752b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.968615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util\") pod \"b41a70db-cf8d-429d-bd35-f11d1774752b\" (UID: \"b41a70db-cf8d-429d-bd35-f11d1774752b\") " Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.974349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle" (OuterVolumeSpecName: "bundle") pod "b41a70db-cf8d-429d-bd35-f11d1774752b" (UID: "b41a70db-cf8d-429d-bd35-f11d1774752b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.978423 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf" (OuterVolumeSpecName: "kube-api-access-p7fmf") pod "b41a70db-cf8d-429d-bd35-f11d1774752b" (UID: "b41a70db-cf8d-429d-bd35-f11d1774752b"). InnerVolumeSpecName "kube-api-access-p7fmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:49 crc kubenswrapper[5120]: I1208 19:41:49.982786 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util" (OuterVolumeSpecName: "util") pod "b41a70db-cf8d-429d-bd35-f11d1774752b" (UID: "b41a70db-cf8d-429d-bd35-f11d1774752b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.069671 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.069709 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b41a70db-cf8d-429d-bd35-f11d1774752b-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.069718 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7fmf\" (UniqueName: \"kubernetes.io/projected/b41a70db-cf8d-429d-bd35-f11d1774752b-kube-api-access-p7fmf\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.287750 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-nmdp7" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.377674 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.526505 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" event={"ID":"b41a70db-cf8d-429d-bd35-f11d1774752b","Type":"ContainerDied","Data":"657146144dd6921c7158da13df31c464efdc1a95cdf4acff04fdd3fb63afef50"} Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.526551 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657146144dd6921c7158da13df31c464efdc1a95cdf4acff04fdd3fb63afef50" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.526677 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e6447b" Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.545122 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerStarted","Data":"6d3c8eb9986c91b53af7d8ff2e2ffcf00bcb4d4b6e3fdfc832a38da8e4a96a7e"} Dec 08 19:41:50 crc kubenswrapper[5120]: I1208 19:41:50.569608 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swmfh" podStartSLOduration=7.608633681 podStartE2EDuration="8.569590876s" podCreationTimestamp="2025-12-08 19:41:42 +0000 UTC" firstStartedPulling="2025-12-08 19:41:47.402861802 +0000 UTC m=+760.074968441" lastFinishedPulling="2025-12-08 19:41:48.363818987 +0000 UTC m=+761.035925636" observedRunningTime="2025-12-08 19:41:50.567892193 +0000 UTC m=+763.239998842" watchObservedRunningTime="2025-12-08 19:41:50.569590876 +0000 UTC m=+763.241697525" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.050182 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153038 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153834 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153855 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153868 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="util" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153877 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="util" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153889 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153896 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153925 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="util" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153932 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="util" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153944 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="pull" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153951 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="pull" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153969 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="pull" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.153976 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="pull" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.154097 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b41a70db-cf8d-429d-bd35-f11d1774752b" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.154111 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6f00c842-ebc9-496d-8e6e-643c63166626" containerName="extract" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.197685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6fg\" (UniqueName: \"kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg\") pod \"6f00c842-ebc9-496d-8e6e-643c63166626\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.198021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle\") pod \"6f00c842-ebc9-496d-8e6e-643c63166626\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.198090 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util\") pod \"6f00c842-ebc9-496d-8e6e-643c63166626\" (UID: \"6f00c842-ebc9-496d-8e6e-643c63166626\") " Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.199069 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle" (OuterVolumeSpecName: "bundle") pod "6f00c842-ebc9-496d-8e6e-643c63166626" (UID: "6f00c842-ebc9-496d-8e6e-643c63166626"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.203798 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg" (OuterVolumeSpecName: "kube-api-access-gn6fg") pod "6f00c842-ebc9-496d-8e6e-643c63166626" (UID: "6f00c842-ebc9-496d-8e6e-643c63166626"). InnerVolumeSpecName "kube-api-access-gn6fg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.222031 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util" (OuterVolumeSpecName: "util") pod "6f00c842-ebc9-496d-8e6e-643c63166626" (UID: "6f00c842-ebc9-496d-8e6e-643c63166626"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.299831 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gn6fg\" (UniqueName: \"kubernetes.io/projected/6f00c842-ebc9-496d-8e6e-643c63166626-kube-api-access-gn6fg\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.299870 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.299880 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f00c842-ebc9-496d-8e6e-643c63166626-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.456133 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.456336 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.505104 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.505151 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm467\" (UniqueName: \"kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.505238 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.575668 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.575743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8flnpll" event={"ID":"6f00c842-ebc9-496d-8e6e-643c63166626","Type":"ContainerDied","Data":"ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21"} Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.575801 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca98eb0f166aeaa6ca1f802f38d7b2c01304a2034d178078bff5f35abe602f21" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.606015 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.606131 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.606156 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zm467\" (UniqueName: \"kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.606491 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.606823 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.626464 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm467\" (UniqueName: \"kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467\") pod \"community-operators-5rrg7\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:51 crc kubenswrapper[5120]: I1208 19:41:51.775979 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:41:52 crc kubenswrapper[5120]: I1208 19:41:52.138431 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:41:52 crc kubenswrapper[5120]: I1208 19:41:52.583112 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:52 crc kubenswrapper[5120]: I1208 19:41:52.583180 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:52 crc kubenswrapper[5120]: I1208 19:41:52.584865 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerStarted","Data":"548f51e1a405e255b2de68e829024f119d23a58eef897d9d2d10637a3e91a1cb"} Dec 08 19:41:52 crc kubenswrapper[5120]: I1208 19:41:52.669961 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:41:53 crc kubenswrapper[5120]: I1208 19:41:53.034545 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:41:53 crc kubenswrapper[5120]: I1208 19:41:53.034626 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:41:53 crc kubenswrapper[5120]: I1208 19:41:53.605660 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerID="4361e84f5657e4b0ea4dac64652c8c11f253d2e792b09515122c2a9bfbc6c301" exitCode=0 Dec 08 19:41:53 crc kubenswrapper[5120]: I1208 19:41:53.605908 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" 
event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerDied","Data":"4361e84f5657e4b0ea4dac64652c8c11f253d2e792b09515122c2a9bfbc6c301"} Dec 08 19:41:57 crc kubenswrapper[5120]: I1208 19:41:57.985502 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-c42kv"] Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.010714 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-c42kv"] Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.010846 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.012838 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.013277 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.014560 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-9nzrr\"" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.108996 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q94p\" (UniqueName: \"kubernetes.io/projected/0517e6a1-4802-446c-865d-da4935151902-kube-api-access-9q94p\") pod \"interconnect-operator-78b9bd8798-c42kv\" (UID: \"0517e6a1-4802-446c-865d-da4935151902\") " pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.211924 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9q94p\" (UniqueName: \"kubernetes.io/projected/0517e6a1-4802-446c-865d-da4935151902-kube-api-access-9q94p\") pod \"interconnect-operator-78b9bd8798-c42kv\" (UID: \"0517e6a1-4802-446c-865d-da4935151902\") " pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.246071 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q94p\" (UniqueName: \"kubernetes.io/projected/0517e6a1-4802-446c-865d-da4935151902-kube-api-access-9q94p\") pod \"interconnect-operator-78b9bd8798-c42kv\" (UID: \"0517e6a1-4802-446c-865d-da4935151902\") " pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" Dec 08 19:41:58 crc kubenswrapper[5120]: I1208 19:41:58.381768 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" Dec 08 19:42:00 crc kubenswrapper[5120]: I1208 19:42:00.405450 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-784954fb9d-dbmj9"] Dec 08 19:42:00 crc kubenswrapper[5120]: I1208 19:42:00.919992 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-784954fb9d-dbmj9"] Dec 08 19:42:00 crc kubenswrapper[5120]: I1208 19:42:00.920749 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:00 crc kubenswrapper[5120]: I1208 19:42:00.922454 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-2bpp6\"" Dec 08 19:42:00 crc kubenswrapper[5120]: I1208 19:42:00.922741 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.045286 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-webhook-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.045376 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5lb\" (UniqueName: \"kubernetes.io/projected/10a54f49-d066-4a72-8ef6-e6ba70de9fec-kube-api-access-bl5lb\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.045416 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-apiservice-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.147084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-webhook-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.147181 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5lb\" (UniqueName: \"kubernetes.io/projected/10a54f49-d066-4a72-8ef6-e6ba70de9fec-kube-api-access-bl5lb\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.147234 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-apiservice-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.154947 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-webhook-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.161746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/10a54f49-d066-4a72-8ef6-e6ba70de9fec-apiservice-cert\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.167531 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5lb\" (UniqueName: \"kubernetes.io/projected/10a54f49-d066-4a72-8ef6-e6ba70de9fec-kube-api-access-bl5lb\") pod \"elastic-operator-784954fb9d-dbmj9\" (UID: \"10a54f49-d066-4a72-8ef6-e6ba70de9fec\") " pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:01 crc kubenswrapper[5120]: I1208 19:42:01.246564 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" Dec 08 19:42:03 crc kubenswrapper[5120]: I1208 19:42:03.650281 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:42:05 crc kubenswrapper[5120]: I1208 19:42:05.346610 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:42:05 crc kubenswrapper[5120]: I1208 19:42:05.347139 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swmfh" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="registry-server" containerID="cri-o://6d3c8eb9986c91b53af7d8ff2e2ffcf00bcb4d4b6e3fdfc832a38da8e4a96a7e" gracePeriod=2 Dec 08 19:42:05 crc kubenswrapper[5120]: I1208 19:42:05.688403 5120 generic.go:358] "Generic (PLEG): container finished" podID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerID="6d3c8eb9986c91b53af7d8ff2e2ffcf00bcb4d4b6e3fdfc832a38da8e4a96a7e" exitCode=0 Dec 08 19:42:05 crc kubenswrapper[5120]: I1208 19:42:05.688484 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerDied","Data":"6d3c8eb9986c91b53af7d8ff2e2ffcf00bcb4d4b6e3fdfc832a38da8e4a96a7e"} Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.309610 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.461062 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smdsr\" (UniqueName: \"kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr\") pod \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.461133 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content\") pod \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.461249 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities\") pod \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\" (UID: \"afbb84f6-0831-4c68-9cfa-1fb9316bda93\") " Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.462610 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities" (OuterVolumeSpecName: "utilities") pod "afbb84f6-0831-4c68-9cfa-1fb9316bda93" (UID: "afbb84f6-0831-4c68-9cfa-1fb9316bda93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.474106 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr" (OuterVolumeSpecName: "kube-api-access-smdsr") pod "afbb84f6-0831-4c68-9cfa-1fb9316bda93" (UID: "afbb84f6-0831-4c68-9cfa-1fb9316bda93"). InnerVolumeSpecName "kube-api-access-smdsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.533192 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afbb84f6-0831-4c68-9cfa-1fb9316bda93" (UID: "afbb84f6-0831-4c68-9cfa-1fb9316bda93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.564913 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smdsr\" (UniqueName: \"kubernetes.io/projected/afbb84f6-0831-4c68-9cfa-1fb9316bda93-kube-api-access-smdsr\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.564936 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.564946 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afbb84f6-0831-4c68-9cfa-1fb9316bda93-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.651883 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-784954fb9d-dbmj9"] Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.713233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" event={"ID":"10a54f49-d066-4a72-8ef6-e6ba70de9fec","Type":"ContainerStarted","Data":"2b48a6f8235a734fdab948597a308092e744bf98e436b2a94486a43de3a42c5d"} Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.715210 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swmfh" event={"ID":"afbb84f6-0831-4c68-9cfa-1fb9316bda93","Type":"ContainerDied","Data":"d0c41802a6149b9b0d75a5948bb0fb9e97a3fee504285663e8b8714d1d512265"} Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.715246 5120 scope.go:117] "RemoveContainer" containerID="6d3c8eb9986c91b53af7d8ff2e2ffcf00bcb4d4b6e3fdfc832a38da8e4a96a7e" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.715429 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swmfh" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.738991 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.744907 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swmfh"] Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.777184 5120 scope.go:117] "RemoveContainer" containerID="eceeff7cfcb613bafdc00c10cb81fe25e5d0677c1a6f31e773584987634a7e76" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.812521 5120 scope.go:117] "RemoveContainer" containerID="94d9cb1c82c2f6934ed49f071cbf5319a650f6130e8b7dc5655fbf10815e1d13" Dec 08 19:42:09 crc kubenswrapper[5120]: I1208 19:42:09.826918 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-c42kv"] Dec 08 19:42:09 crc kubenswrapper[5120]: W1208 19:42:09.873435 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0517e6a1_4802_446c_865d_da4935151902.slice/crio-9b348fa11181e9e6b6e7c0750e42c8b847cecbbd29248fb0b94ef7f083f69ff6 WatchSource:0}: Error finding container 9b348fa11181e9e6b6e7c0750e42c8b847cecbbd29248fb0b94ef7f083f69ff6: Status 404 returned error can't find the container with id 9b348fa11181e9e6b6e7c0750e42c8b847cecbbd29248fb0b94ef7f083f69ff6 Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.724132 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" event={"ID":"d080a6af-d35c-46d0-9580-438838771692","Type":"ContainerStarted","Data":"cf94e01096522a77927ed4bfd45adcaad8ccfa4992904988685fcdf7d6a673e1"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.724722 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.728069 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.728747 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" event={"ID":"c729374c-925c-4808-ab08-f6f7c1ef0f8a","Type":"ContainerStarted","Data":"c5c1bd2b5d0c1aa7f667e41e0723af8722c975e889662d4c77425f2100689e27"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.731117 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerID="67c6810ce2596de6f39be0db09db5bab2ba92bccf732b939ef3e4cbc8dc51e8f" exitCode=0 Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.731151 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerDied","Data":"67c6810ce2596de6f39be0db09db5bab2ba92bccf732b939ef3e4cbc8dc51e8f"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.732469 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" event={"ID":"0517e6a1-4802-446c-865d-da4935151902","Type":"ContainerStarted","Data":"9b348fa11181e9e6b6e7c0750e42c8b847cecbbd29248fb0b94ef7f083f69ff6"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 
19:42:10.736333 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" event={"ID":"b17e2bc2-7fa8-48d5-be84-6033bca55151","Type":"ContainerStarted","Data":"cda8f3981933ee3818412228c799d01255539f2d40c887270b983dfa60191119"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.736524 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.738139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" event={"ID":"d42d0baf-0dee-4327-835b-d61c28032b62","Type":"ContainerStarted","Data":"22673c86847d22b11e3d8e0fd719914a794d58647497482c754afc1e700312d8"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.740968 5120 generic.go:358] "Generic (PLEG): container finished" podID="289827c9-3e57-496d-92f8-07197e56ff6e" containerID="4b086bf36dcc2281109a86d140e2ca921b3b1cb97845d4b9195ce4384a0f7412" exitCode=0 Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.741056 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" event={"ID":"289827c9-3e57-496d-92f8-07197e56ff6e","Type":"ContainerDied","Data":"4b086bf36dcc2281109a86d140e2ca921b3b1cb97845d4b9195ce4384a0f7412"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.743277 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-2zxxb" podStartSLOduration=3.01982063 podStartE2EDuration="23.743266823s" podCreationTimestamp="2025-12-08 19:41:47 +0000 UTC" firstStartedPulling="2025-12-08 19:41:48.664253308 +0000 UTC m=+761.336359957" lastFinishedPulling="2025-12-08 19:42:09.387699501 +0000 UTC m=+782.059806150" observedRunningTime="2025-12-08 19:42:10.742957843 +0000 UTC m=+783.415064502" watchObservedRunningTime="2025-12-08 19:42:10.743266823 +0000 UTC m=+783.415373472" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.744220 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" event={"ID":"2764c5fb-6e2e-4b86-965e-b43f7c6e510a","Type":"ContainerStarted","Data":"af62b4744e3bfeb753a87faf6fffebe3182d1a528a63416cc7e34ff5c88486b1"} Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.781925 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-fqm2w" podStartSLOduration=2.650871047 podStartE2EDuration="23.781909175s" podCreationTimestamp="2025-12-08 19:41:47 +0000 UTC" firstStartedPulling="2025-12-08 19:41:48.175205616 +0000 UTC m=+760.847312265" lastFinishedPulling="2025-12-08 19:42:09.306243744 +0000 UTC m=+781.978350393" observedRunningTime="2025-12-08 19:42:10.775148484 +0000 UTC m=+783.447255133" watchObservedRunningTime="2025-12-08 19:42:10.781909175 +0000 UTC m=+783.454015824" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.800794 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" podStartSLOduration=3.097922131 podStartE2EDuration="23.800773998s" podCreationTimestamp="2025-12-08 19:41:47 +0000 UTC" firstStartedPulling="2025-12-08 19:41:48.59267055 +0000 UTC m=+761.264777199" lastFinishedPulling="2025-12-08 19:42:09.295522407 +0000 UTC 
m=+781.967629066" observedRunningTime="2025-12-08 19:42:10.798518777 +0000 UTC m=+783.470625426" watchObservedRunningTime="2025-12-08 19:42:10.800773998 +0000 UTC m=+783.472880647" Dec 08 19:42:10 crc kubenswrapper[5120]: I1208 19:42:10.839258 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-nw2wn" podStartSLOduration=3.064490421 podStartE2EDuration="23.839236905s" podCreationTimestamp="2025-12-08 19:41:47 +0000 UTC" firstStartedPulling="2025-12-08 19:41:48.520639269 +0000 UTC m=+761.192745918" lastFinishedPulling="2025-12-08 19:42:09.295385753 +0000 UTC m=+781.967492402" observedRunningTime="2025-12-08 19:42:10.828631793 +0000 UTC m=+783.500738452" watchObservedRunningTime="2025-12-08 19:42:10.839236905 +0000 UTC m=+783.511343554" Dec 08 19:42:11 crc kubenswrapper[5120]: I1208 19:42:11.671513 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" path="/var/lib/kubelet/pods/afbb84f6-0831-4c68-9cfa-1fb9316bda93/volumes" Dec 08 19:42:12 crc kubenswrapper[5120]: I1208 19:42:12.766285 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerStarted","Data":"3a9566046e5f9b28f197ff424d3828c4a3898508b263af411b5d2b65d6fdd773"} Dec 08 19:42:12 crc kubenswrapper[5120]: I1208 19:42:12.773017 5120 generic.go:358] "Generic (PLEG): container finished" podID="289827c9-3e57-496d-92f8-07197e56ff6e" containerID="74c643b184f206046cfb820c38130c86b714f6420d5fef8793ace53f7975941d" exitCode=0 Dec 08 19:42:12 crc kubenswrapper[5120]: I1208 19:42:12.773423 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" event={"ID":"289827c9-3e57-496d-92f8-07197e56ff6e","Type":"ContainerDied","Data":"74c643b184f206046cfb820c38130c86b714f6420d5fef8793ace53f7975941d"} Dec 08 19:42:12 crc kubenswrapper[5120]: I1208 19:42:12.789418 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d54446746-xkvqs" podStartSLOduration=4.758892718 podStartE2EDuration="25.789404081s" podCreationTimestamp="2025-12-08 19:41:47 +0000 UTC" firstStartedPulling="2025-12-08 19:41:48.417728039 +0000 UTC m=+761.089834688" lastFinishedPulling="2025-12-08 19:42:09.448239402 +0000 UTC m=+782.120346051" observedRunningTime="2025-12-08 19:42:10.929942442 +0000 UTC m=+783.602049101" watchObservedRunningTime="2025-12-08 19:42:12.789404081 +0000 UTC m=+785.461510730" Dec 08 19:42:12 crc kubenswrapper[5120]: I1208 19:42:12.792700 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rrg7" podStartSLOduration=6.01087524 podStartE2EDuration="21.792679344s" podCreationTimestamp="2025-12-08 19:41:51 +0000 UTC" firstStartedPulling="2025-12-08 19:41:53.606840697 +0000 UTC m=+766.278947346" lastFinishedPulling="2025-12-08 19:42:09.388644801 +0000 UTC m=+782.060751450" observedRunningTime="2025-12-08 19:42:12.785648344 +0000 UTC m=+785.457754993" watchObservedRunningTime="2025-12-08 19:42:12.792679344 +0000 UTC m=+785.464785993" Dec 08 19:42:13 crc kubenswrapper[5120]: I1208 19:42:13.787415 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" 
event={"ID":"10a54f49-d066-4a72-8ef6-e6ba70de9fec","Type":"ContainerStarted","Data":"4419b162ae5e8ed9d07642c768302f47311a9a0b891af8a74408a178d6e181c9"} Dec 08 19:42:13 crc kubenswrapper[5120]: I1208 19:42:13.809134 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-784954fb9d-dbmj9" podStartSLOduration=10.193975149 podStartE2EDuration="13.80911423s" podCreationTimestamp="2025-12-08 19:42:00 +0000 UTC" firstStartedPulling="2025-12-08 19:42:09.685130027 +0000 UTC m=+782.357236676" lastFinishedPulling="2025-12-08 19:42:13.300269108 +0000 UTC m=+785.972375757" observedRunningTime="2025-12-08 19:42:13.806325333 +0000 UTC m=+786.478431992" watchObservedRunningTime="2025-12-08 19:42:13.80911423 +0000 UTC m=+786.481220889" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.090651 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.227595 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle\") pod \"289827c9-3e57-496d-92f8-07197e56ff6e\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.227814 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util\") pod \"289827c9-3e57-496d-92f8-07197e56ff6e\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.227887 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64pg2\" (UniqueName: \"kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2\") pod \"289827c9-3e57-496d-92f8-07197e56ff6e\" (UID: \"289827c9-3e57-496d-92f8-07197e56ff6e\") " Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.228719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle" (OuterVolumeSpecName: "bundle") pod "289827c9-3e57-496d-92f8-07197e56ff6e" (UID: "289827c9-3e57-496d-92f8-07197e56ff6e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.246105 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util" (OuterVolumeSpecName: "util") pod "289827c9-3e57-496d-92f8-07197e56ff6e" (UID: "289827c9-3e57-496d-92f8-07197e56ff6e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.255296 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2" (OuterVolumeSpecName: "kube-api-access-64pg2") pod "289827c9-3e57-496d-92f8-07197e56ff6e" (UID: "289827c9-3e57-496d-92f8-07197e56ff6e"). InnerVolumeSpecName "kube-api-access-64pg2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.342459 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.342498 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-64pg2\" (UniqueName: \"kubernetes.io/projected/289827c9-3e57-496d-92f8-07197e56ff6e-kube-api-access-64pg2\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.342511 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/289827c9-3e57-496d-92f8-07197e56ff6e-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.520924 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521587 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="pull" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521607 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="pull" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521621 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="util" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521627 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="util" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521636 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="extract-content" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521642 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="extract-content" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521651 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="extract" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521656 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="extract" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521673 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="extract-utilities" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521679 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="extract-utilities" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521687 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="registry-server" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521692 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="registry-server" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521775 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="afbb84f6-0831-4c68-9cfa-1fb9316bda93" containerName="registry-server" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.521787 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="289827c9-3e57-496d-92f8-07197e56ff6e" containerName="extract" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.530716 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.535013 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.535370 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.535561 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.535847 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.536008 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.536259 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-pcp2q\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.536343 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.536606 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.538300 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.542715 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646242 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646305 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646346 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646385 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646413 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646443 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646468 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646497 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1195b10b-916d-4648-b046-15746d70afa5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646553 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646611 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646636 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646660 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.646675 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.748126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.748267 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.749252 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.749422 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.749802 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750339 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750489 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750531 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750619 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750693 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750734 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750773 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750788 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750805 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750849 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1195b10b-916d-4648-b046-15746d70afa5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.750874 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.751378 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.752058 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.752487 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.753784 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.754141 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.754545 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.755337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.759409 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.760014 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.760112 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.762540 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/1195b10b-916d-4648-b046-15746d70afa5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.768790 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: 
\"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.768926 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/1195b10b-916d-4648-b046-15746d70afa5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"1195b10b-916d-4648-b046-15746d70afa5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.799048 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" event={"ID":"289827c9-3e57-496d-92f8-07197e56ff6e","Type":"ContainerDied","Data":"3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383"} Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.799088 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avzj4b" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.799105 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b74b9552219d7cc63f9f079537fbcadfbb577a5f70eb92a31eb192add182383" Dec 08 19:42:14 crc kubenswrapper[5120]: I1208 19:42:14.849858 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:15 crc kubenswrapper[5120]: I1208 19:42:15.455645 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerName="registry" containerID="cri-o://272123ebf619ab54a9cef4678b4e6074c5e4b50e04f9b434c39812e3c14873e5" gracePeriod=30 Dec 08 19:42:15 crc kubenswrapper[5120]: I1208 19:42:15.894596 5120 patch_prober.go:28] interesting pod/image-registry-66587d64c8-cdcv9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.8:5000/healthz\": dial tcp 10.217.0.8:5000: connect: connection refused" start-of-body= Dec 08 19:42:15 crc kubenswrapper[5120]: I1208 19:42:15.894686 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.8:5000/healthz\": dial tcp 10.217.0.8:5000: connect: connection refused" Dec 08 19:42:16 crc kubenswrapper[5120]: I1208 19:42:16.821823 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" event={"ID":"b1750a48-cdf8-4fc3-b3c1-4577527c256b","Type":"ContainerDied","Data":"272123ebf619ab54a9cef4678b4e6074c5e4b50e04f9b434c39812e3c14873e5"} Dec 08 19:42:16 crc kubenswrapper[5120]: I1208 19:42:16.821845 5120 generic.go:358] "Generic (PLEG): container finished" podID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerID="272123ebf619ab54a9cef4678b4e6074c5e4b50e04f9b434c39812e3c14873e5" exitCode=0 Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.368542 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441487 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441658 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441721 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441746 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441768 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441801 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzsbn\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441964 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.441982 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca\") pod \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\" (UID: \"b1750a48-cdf8-4fc3-b3c1-4577527c256b\") " Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.442367 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.442598 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.448977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.451162 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn" (OuterVolumeSpecName: "kube-api-access-wzsbn") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "kube-api-access-wzsbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.452411 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.469996 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.476264 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.476302 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b1750a48-cdf8-4fc3-b3c1-4577527c256b" (UID: "b1750a48-cdf8-4fc3-b3c1-4577527c256b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.542980 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543011 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543020 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b1750a48-cdf8-4fc3-b3c1-4577527c256b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543044 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b1750a48-cdf8-4fc3-b3c1-4577527c256b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543053 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543060 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b1750a48-cdf8-4fc3-b3c1-4577527c256b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.543068 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzsbn\" (UniqueName: \"kubernetes.io/projected/b1750a48-cdf8-4fc3-b3c1-4577527c256b-kube-api-access-wzsbn\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:19 crc kubenswrapper[5120]: E1208 19:42:19.786040 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1750a48_cdf8_4fc3_b3c1_4577527c256b.slice/crio-a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1750a48_cdf8_4fc3_b3c1_4577527c256b.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.834248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:42:19 crc kubenswrapper[5120]: W1208 19:42:19.835995 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1195b10b_916d_4648_b046_15746d70afa5.slice/crio-11a1d0f055147f0a287f5a7f03a92d8c5d9448971437c3d5ab9c94cd1a328845 WatchSource:0}: Error finding container 11a1d0f055147f0a287f5a7f03a92d8c5d9448971437c3d5ab9c94cd1a328845: Status 404 returned error can't find the container with id 11a1d0f055147f0a287f5a7f03a92d8c5d9448971437c3d5ab9c94cd1a328845 Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.841543 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" 
event={"ID":"0517e6a1-4802-446c-865d-da4935151902","Type":"ContainerStarted","Data":"f7910f471abf27356ecd3cd7cf5de40941620b8b8f03589cfc9db444f892250a"} Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.843547 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.843565 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-cdcv9" event={"ID":"b1750a48-cdf8-4fc3-b3c1-4577527c256b","Type":"ContainerDied","Data":"a06f44e52780f3e67bc2c999c0a146b3eca361b420d2400d06742348eb701698"} Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.843603 5120 scope.go:117] "RemoveContainer" containerID="272123ebf619ab54a9cef4678b4e6074c5e4b50e04f9b434c39812e3c14873e5" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.860511 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-c42kv" podStartSLOduration=13.340482859 podStartE2EDuration="22.860494124s" podCreationTimestamp="2025-12-08 19:41:57 +0000 UTC" firstStartedPulling="2025-12-08 19:42:09.877243558 +0000 UTC m=+782.549350197" lastFinishedPulling="2025-12-08 19:42:19.397254803 +0000 UTC m=+792.069361462" observedRunningTime="2025-12-08 19:42:19.856928603 +0000 UTC m=+792.529035262" watchObservedRunningTime="2025-12-08 19:42:19.860494124 +0000 UTC m=+792.532600773" Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.877180 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:42:19 crc kubenswrapper[5120]: I1208 19:42:19.882579 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-cdcv9"] Dec 08 19:42:20 crc kubenswrapper[5120]: I1208 19:42:20.856891 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1195b10b-916d-4648-b046-15746d70afa5","Type":"ContainerStarted","Data":"11a1d0f055147f0a287f5a7f03a92d8c5d9448971437c3d5ab9c94cd1a328845"} Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.667321 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" path="/var/lib/kubelet/pods/b1750a48-cdf8-4fc3-b3c1-4577527c256b/volumes" Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.763330 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-tzkn2" Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.780800 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.780863 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.833707 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:21 crc kubenswrapper[5120]: I1208 19:42:21.916676 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:23 crc kubenswrapper[5120]: I1208 19:42:23.035065 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:23 crc kubenswrapper[5120]: I1208 19:42:23.035133 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:23 crc kubenswrapper[5120]: I1208 19:42:23.939852 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:42:23 crc kubenswrapper[5120]: I1208 19:42:23.940447 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rrg7" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="registry-server" containerID="cri-o://3a9566046e5f9b28f197ff424d3828c4a3898508b263af411b5d2b65d6fdd773" gracePeriod=2 Dec 08 19:42:25 crc kubenswrapper[5120]: I1208 19:42:25.888141 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerID="3a9566046e5f9b28f197ff424d3828c4a3898508b263af411b5d2b65d6fdd773" exitCode=0 Dec 08 19:42:25 crc kubenswrapper[5120]: I1208 19:42:25.888194 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerDied","Data":"3a9566046e5f9b28f197ff424d3828c4a3898508b263af411b5d2b65d6fdd773"} Dec 08 19:42:26 crc kubenswrapper[5120]: I1208 19:42:26.662623 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw"] Dec 08 19:42:26 crc kubenswrapper[5120]: I1208 19:42:26.663405 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerName="registry" Dec 08 19:42:26 crc kubenswrapper[5120]: I1208 19:42:26.663425 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerName="registry" Dec 08 19:42:26 crc kubenswrapper[5120]: I1208 19:42:26.663559 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1750a48-cdf8-4fc3-b3c1-4577527c256b" containerName="registry" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.209771 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw"] Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.209961 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.212737 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-2tz5p\"" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.212835 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.212882 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.345790 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af7c44b-625b-4c54-909c-7e7cbf2adf30-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.345879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64twp\" (UniqueName: \"kubernetes.io/projected/9af7c44b-625b-4c54-909c-7e7cbf2adf30-kube-api-access-64twp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.447437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-64twp\" (UniqueName: \"kubernetes.io/projected/9af7c44b-625b-4c54-909c-7e7cbf2adf30-kube-api-access-64twp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.447523 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af7c44b-625b-4c54-909c-7e7cbf2adf30-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.448207 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9af7c44b-625b-4c54-909c-7e7cbf2adf30-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.467958 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-64twp\" (UniqueName: \"kubernetes.io/projected/9af7c44b-625b-4c54-909c-7e7cbf2adf30-kube-api-access-64twp\") pod \"cert-manager-operator-controller-manager-64c74584c4-mvctw\" (UID: \"9af7c44b-625b-4c54-909c-7e7cbf2adf30\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:27 crc kubenswrapper[5120]: I1208 19:42:27.524886 5120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.458868 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.563918 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities\") pod \"1f2b07b0-6291-412a-b2ac-55ad46205d88\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.564029 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content\") pod \"1f2b07b0-6291-412a-b2ac-55ad46205d88\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.564133 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm467\" (UniqueName: \"kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467\") pod \"1f2b07b0-6291-412a-b2ac-55ad46205d88\" (UID: \"1f2b07b0-6291-412a-b2ac-55ad46205d88\") " Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.564966 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities" (OuterVolumeSpecName: "utilities") pod "1f2b07b0-6291-412a-b2ac-55ad46205d88" (UID: "1f2b07b0-6291-412a-b2ac-55ad46205d88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.572336 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467" (OuterVolumeSpecName: "kube-api-access-zm467") pod "1f2b07b0-6291-412a-b2ac-55ad46205d88" (UID: "1f2b07b0-6291-412a-b2ac-55ad46205d88"). InnerVolumeSpecName "kube-api-access-zm467". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.628777 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f2b07b0-6291-412a-b2ac-55ad46205d88" (UID: "1f2b07b0-6291-412a-b2ac-55ad46205d88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.665269 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zm467\" (UniqueName: \"kubernetes.io/projected/1f2b07b0-6291-412a-b2ac-55ad46205d88-kube-api-access-zm467\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.665304 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.665319 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2b07b0-6291-412a-b2ac-55ad46205d88-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.912417 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rrg7" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.912469 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rrg7" event={"ID":"1f2b07b0-6291-412a-b2ac-55ad46205d88","Type":"ContainerDied","Data":"548f51e1a405e255b2de68e829024f119d23a58eef897d9d2d10637a3e91a1cb"} Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.912537 5120 scope.go:117] "RemoveContainer" containerID="3a9566046e5f9b28f197ff424d3828c4a3898508b263af411b5d2b65d6fdd773" Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.962821 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:42:28 crc kubenswrapper[5120]: I1208 19:42:28.966366 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rrg7"] Dec 08 19:42:29 crc kubenswrapper[5120]: I1208 19:42:29.668933 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" path="/var/lib/kubelet/pods/1f2b07b0-6291-412a-b2ac-55ad46205d88/volumes" Dec 08 19:42:33 crc kubenswrapper[5120]: I1208 19:42:33.553729 5120 scope.go:117] "RemoveContainer" containerID="67c6810ce2596de6f39be0db09db5bab2ba92bccf732b939ef3e4cbc8dc51e8f" Dec 08 19:42:33 crc kubenswrapper[5120]: I1208 19:42:33.796564 5120 scope.go:117] "RemoveContainer" containerID="4361e84f5657e4b0ea4dac64652c8c11f253d2e792b09515122c2a9bfbc6c301" Dec 08 19:42:33 crc kubenswrapper[5120]: I1208 19:42:33.961346 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw"] Dec 08 19:42:33 crc kubenswrapper[5120]: W1208 19:42:33.963338 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9af7c44b_625b_4c54_909c_7e7cbf2adf30.slice/crio-430af15e57dff9273c64fbb67fa6153fb520a009b681c5f5c265f67f40d48161 WatchSource:0}: Error finding container 430af15e57dff9273c64fbb67fa6153fb520a009b681c5f5c265f67f40d48161: Status 404 returned error can't find the container with id 430af15e57dff9273c64fbb67fa6153fb520a009b681c5f5c265f67f40d48161 Dec 08 19:42:34 crc kubenswrapper[5120]: I1208 19:42:34.045753 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" 
event={"ID":"9af7c44b-625b-4c54-909c-7e7cbf2adf30","Type":"ContainerStarted","Data":"430af15e57dff9273c64fbb67fa6153fb520a009b681c5f5c265f67f40d48161"} Dec 08 19:42:35 crc kubenswrapper[5120]: I1208 19:42:35.054880 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1195b10b-916d-4648-b046-15746d70afa5","Type":"ContainerStarted","Data":"c3563f2da68b996e6b49a966b7ed2dd68db6ce53ac81754a30dbc6b4fdc3db41"} Dec 08 19:42:35 crc kubenswrapper[5120]: I1208 19:42:35.158612 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:42:35 crc kubenswrapper[5120]: I1208 19:42:35.190509 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:42:36 crc kubenswrapper[5120]: I1208 19:42:36.062762 5120 generic.go:358] "Generic (PLEG): container finished" podID="1195b10b-916d-4648-b046-15746d70afa5" containerID="c3563f2da68b996e6b49a966b7ed2dd68db6ce53ac81754a30dbc6b4fdc3db41" exitCode=0 Dec 08 19:42:36 crc kubenswrapper[5120]: I1208 19:42:36.062825 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1195b10b-916d-4648-b046-15746d70afa5","Type":"ContainerDied","Data":"c3563f2da68b996e6b49a966b7ed2dd68db6ce53ac81754a30dbc6b4fdc3db41"} Dec 08 19:42:37 crc kubenswrapper[5120]: I1208 19:42:37.071688 5120 generic.go:358] "Generic (PLEG): container finished" podID="1195b10b-916d-4648-b046-15746d70afa5" containerID="1b08801c6f2c350c9c0a90d22cbd1c2a98adc41ab4abe90354c9b78397a2cd13" exitCode=0 Dec 08 19:42:37 crc kubenswrapper[5120]: I1208 19:42:37.071882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1195b10b-916d-4648-b046-15746d70afa5","Type":"ContainerDied","Data":"1b08801c6f2c350c9c0a90d22cbd1c2a98adc41ab4abe90354c9b78397a2cd13"} Dec 08 19:42:37 crc kubenswrapper[5120]: I1208 19:42:37.074412 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" event={"ID":"9af7c44b-625b-4c54-909c-7e7cbf2adf30","Type":"ContainerStarted","Data":"80bac4db737b2ceac86511390de17d5dc8121528f86065bde37035ba6634e9bb"} Dec 08 19:42:37 crc kubenswrapper[5120]: I1208 19:42:37.137155 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-mvctw" podStartSLOduration=8.935222082 podStartE2EDuration="11.137136411s" podCreationTimestamp="2025-12-08 19:42:26 +0000 UTC" firstStartedPulling="2025-12-08 19:42:33.966551426 +0000 UTC m=+806.638658075" lastFinishedPulling="2025-12-08 19:42:36.168465755 +0000 UTC m=+808.840572404" observedRunningTime="2025-12-08 19:42:37.131855186 +0000 UTC m=+809.803961835" watchObservedRunningTime="2025-12-08 19:42:37.137136411 +0000 UTC m=+809.809243050" Dec 08 19:42:38 crc kubenswrapper[5120]: I1208 19:42:38.081511 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"1195b10b-916d-4648-b046-15746d70afa5","Type":"ContainerStarted","Data":"226ec9980e65b1cbeec26d642541482b7efbaa8b42c5e42cb8d59f66bd5e015d"} Dec 08 19:42:38 crc kubenswrapper[5120]: I1208 19:42:38.082020 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:38 crc kubenswrapper[5120]: I1208 
19:42:38.125479 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=10.030320966 podStartE2EDuration="24.125463065s" podCreationTimestamp="2025-12-08 19:42:14 +0000 UTC" firstStartedPulling="2025-12-08 19:42:19.837593696 +0000 UTC m=+792.509700345" lastFinishedPulling="2025-12-08 19:42:33.932735795 +0000 UTC m=+806.604842444" observedRunningTime="2025-12-08 19:42:38.122606525 +0000 UTC m=+810.794713194" watchObservedRunningTime="2025-12-08 19:42:38.125463065 +0000 UTC m=+810.797569714" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.977113 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv"] Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978225 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="extract-content" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978243 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="extract-content" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978258 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="extract-utilities" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978266 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="extract-utilities" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978278 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="registry-server" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978287 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="registry-server" Dec 08 19:42:40 crc kubenswrapper[5120]: I1208 19:42:40.978434 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f2b07b0-6291-412a-b2ac-55ad46205d88" containerName="registry-server" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.136401 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv"] Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.136558 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.138701 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.139069 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.140387 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-6lrnj\"" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.246197 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.246251 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg79p\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-kube-api-access-mg79p\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.348325 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.348486 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mg79p\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-kube-api-access-mg79p\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.382935 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg79p\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-kube-api-access-mg79p\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.387061 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93ff168c-ab18-4295-857f-4d32f54b55fa-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-whfnv\" (UID: \"93ff168c-ab18-4295-857f-4d32f54b55fa\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.462076 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" Dec 08 19:42:41 crc kubenswrapper[5120]: I1208 19:42:41.688104 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv"] Dec 08 19:42:41 crc kubenswrapper[5120]: W1208 19:42:41.699091 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ff168c_ab18_4295_857f_4d32f54b55fa.slice/crio-d885793f09f8b91d63f22f78ddebb9655b55c851cdd95f86c2708bf85099cd04 WatchSource:0}: Error finding container d885793f09f8b91d63f22f78ddebb9655b55c851cdd95f86c2708bf85099cd04: Status 404 returned error can't find the container with id d885793f09f8b91d63f22f78ddebb9655b55c851cdd95f86c2708bf85099cd04 Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.103914 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" event={"ID":"93ff168c-ab18-4295-857f-4d32f54b55fa","Type":"ContainerStarted","Data":"d885793f09f8b91d63f22f78ddebb9655b55c851cdd95f86c2708bf85099cd04"} Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.222143 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-24h4d"] Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.230112 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.232973 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-k7fms\"" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.244086 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-24h4d"] Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.364784 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.364899 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48qn\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-kube-api-access-f48qn\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.466107 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f48qn\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-kube-api-access-f48qn\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.466287 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " 
pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.487477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.487620 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48qn\" (UniqueName: \"kubernetes.io/projected/db34582c-22dd-43ca-8400-711d9870260e-kube-api-access-f48qn\") pod \"cert-manager-webhook-7894b5b9b4-24h4d\" (UID: \"db34582c-22dd-43ca-8400-711d9870260e\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.544437 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:42 crc kubenswrapper[5120]: I1208 19:42:42.791361 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-24h4d"] Dec 08 19:42:43 crc kubenswrapper[5120]: I1208 19:42:43.111098 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" event={"ID":"db34582c-22dd-43ca-8400-711d9870260e","Type":"ContainerStarted","Data":"054f41308863ff64a349a1ad755d7e6cf4508a7411358c3ad27c8fa3396685af"} Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.967814 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.992575 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.992824 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.995332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.995361 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-gf97b\"" Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.995413 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 08 19:42:44 crc kubenswrapper[5120]: I1208 19:42:44.995619 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122226 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122272 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122352 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122416 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122488 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122514 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6568r\" (UniqueName: \"kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r\") pod 
\"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122646 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122715 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122763 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.122920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.224771 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225139 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6568r\" (UniqueName: \"kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225270 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225351 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225188 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225613 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.225922 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226577 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226628 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226646 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226734 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226795 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226863 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.226886 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.227124 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.227181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: 
\"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.227230 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.227939 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.233651 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.235042 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.243348 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6568r\" (UniqueName: \"kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r\") pod \"service-telemetry-operator-1-build\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:45 crc kubenswrapper[5120]: I1208 19:42:45.312663 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:42:48 crc kubenswrapper[5120]: I1208 19:42:48.443982 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:42:49 crc kubenswrapper[5120]: I1208 19:42:49.164741 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"933147f5-d93c-4873-918d-b597df00225e","Type":"ContainerStarted","Data":"3a3ac229d4705e622ce5d091905b8b9b37f8959bf2af01b5bd2729d1e15fa5d4"} Dec 08 19:42:49 crc kubenswrapper[5120]: I1208 19:42:49.217907 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="1195b10b-916d-4648-b046-15746d70afa5" containerName="elasticsearch" probeResult="failure" output=< Dec 08 19:42:49 crc kubenswrapper[5120]: {"timestamp": "2025-12-08T19:42:49+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 19:42:49 crc kubenswrapper[5120]: > Dec 08 19:42:50 crc kubenswrapper[5120]: I1208 19:42:50.171802 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" event={"ID":"db34582c-22dd-43ca-8400-711d9870260e","Type":"ContainerStarted","Data":"bf6b0e609975eb42796f84e776b49f9a3d114e6ea9c1e6c7c467b0b7d6c91a2e"} Dec 08 19:42:50 crc kubenswrapper[5120]: I1208 19:42:50.172356 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:50 crc kubenswrapper[5120]: I1208 19:42:50.179848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" event={"ID":"93ff168c-ab18-4295-857f-4d32f54b55fa","Type":"ContainerStarted","Data":"d490f724924e5e5eda5206b6d6179d95f7666bb8b30de0e97960a92da7d65eb0"} Dec 08 19:42:50 crc kubenswrapper[5120]: I1208 19:42:50.193031 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" podStartSLOduration=1.4614733229999999 podStartE2EDuration="8.193007348s" podCreationTimestamp="2025-12-08 19:42:42 +0000 UTC" firstStartedPulling="2025-12-08 19:42:42.803129908 +0000 UTC m=+815.475236557" lastFinishedPulling="2025-12-08 19:42:49.534663923 +0000 UTC m=+822.206770582" observedRunningTime="2025-12-08 19:42:50.186021559 +0000 UTC m=+822.858128218" watchObservedRunningTime="2025-12-08 19:42:50.193007348 +0000 UTC m=+822.865114007" Dec 08 19:42:50 crc kubenswrapper[5120]: I1208 19:42:50.207072 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-whfnv" podStartSLOduration=2.412722993 podStartE2EDuration="10.207046029s" podCreationTimestamp="2025-12-08 19:42:40 +0000 UTC" firstStartedPulling="2025-12-08 19:42:41.70220109 +0000 UTC m=+814.374307739" lastFinishedPulling="2025-12-08 19:42:49.496524086 +0000 UTC m=+822.168630775" observedRunningTime="2025-12-08 19:42:50.206314986 +0000 UTC m=+822.878421635" watchObservedRunningTime="2025-12-08 19:42:50.207046029 +0000 UTC m=+822.879152688" Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.034653 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:53 
crc kubenswrapper[5120]: I1208 19:42:53.034975 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.035033 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.035633 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5"} pod="openshift-machine-config-operator/machine-config-daemon-5j87q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.035686 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" containerID="cri-o://0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5" gracePeriod=600 Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.197818 5120 generic.go:358] "Generic (PLEG): container finished" podID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerID="0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5" exitCode=0 Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.197912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerDied","Data":"0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5"} Dec 08 19:42:53 crc kubenswrapper[5120]: I1208 19:42:53.197985 5120 scope.go:117] "RemoveContainer" containerID="6590f4c156683ee0aa9329f55a6aa9a953f227291c1143ac0b524dd8886082c5" Dec 08 19:42:54 crc kubenswrapper[5120]: I1208 19:42:54.839994 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:42:55 crc kubenswrapper[5120]: I1208 19:42:55.109148 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:42:55 crc kubenswrapper[5120]: I1208 19:42:55.210439 5120 generic.go:358] "Generic (PLEG): container finished" podID="933147f5-d93c-4873-918d-b597df00225e" containerID="87d90d3674550a3120efa493663d6d18eb3b3bd3866f361c30bb512894756969" exitCode=0 Dec 08 19:42:55 crc kubenswrapper[5120]: I1208 19:42:55.210612 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"933147f5-d93c-4873-918d-b597df00225e","Type":"ContainerDied","Data":"87d90d3674550a3120efa493663d6d18eb3b3bd3866f361c30bb512894756969"} Dec 08 19:42:55 crc kubenswrapper[5120]: I1208 19:42:55.213241 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c"} Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.188737 5120 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-24h4d" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.224220 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="933147f5-d93c-4873-918d-b597df00225e" containerName="docker-build" containerID="cri-o://92487acb908dc8a19011dbca303301a39e40b2d7ee2bd098e5695ca211c151e5" gracePeriod=30 Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.224264 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"933147f5-d93c-4873-918d-b597df00225e","Type":"ContainerStarted","Data":"92487acb908dc8a19011dbca303301a39e40b2d7ee2bd098e5695ca211c151e5"} Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.250943 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=7.334610604 podStartE2EDuration="12.250929829s" podCreationTimestamp="2025-12-08 19:42:44 +0000 UTC" firstStartedPulling="2025-12-08 19:42:49.495143852 +0000 UTC m=+822.167250541" lastFinishedPulling="2025-12-08 19:42:54.411463107 +0000 UTC m=+827.083569766" observedRunningTime="2025-12-08 19:42:56.246932973 +0000 UTC m=+828.919039622" watchObservedRunningTime="2025-12-08 19:42:56.250929829 +0000 UTC m=+828.923036478" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.769629 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.839921 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.840054 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.842920 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.843117 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.843731 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.898711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.898784 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.898882 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.898941 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.898981 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899010 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899026 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899055 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899079 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899277 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:56 crc kubenswrapper[5120]: I1208 19:42:56.899327 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g64v\" (UniqueName: \"kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000054 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000156 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: 
\"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000211 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6g64v\" (UniqueName: \"kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000256 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000304 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000335 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000535 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000554 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000568 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000606 5120 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000609 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000632 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000659 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000667 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.000868 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.001088 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.001116 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.001275 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.003895 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.006036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.008615 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.024235 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g64v\" (UniqueName: \"kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v\") pod \"service-telemetry-operator-2-build\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:57 crc kubenswrapper[5120]: I1208 19:42:57.157034 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.042747 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-bbwnw"] Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.192367 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bbwnw"] Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.192551 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.195393 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-kz5mx\"" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.319598 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx82z\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-kube-api-access-vx82z\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.320101 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-bound-sa-token\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.421150 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-bound-sa-token\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.421297 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vx82z\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-kube-api-access-vx82z\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.441102 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx82z\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-kube-api-access-vx82z\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.441709 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/229e33bb-6b88-4417-9b49-a59abecc8c43-bound-sa-token\") pod \"cert-manager-858d87f86b-bbwnw\" (UID: \"229e33bb-6b88-4417-9b49-a59abecc8c43\") " pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:42:58 crc kubenswrapper[5120]: I1208 19:42:58.513870 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bbwnw" Dec 08 19:43:03 crc kubenswrapper[5120]: I1208 19:43:03.986828 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_933147f5-d93c-4873-918d-b597df00225e/docker-build/0.log" Dec 08 19:43:03 crc kubenswrapper[5120]: I1208 19:43:03.987538 5120 generic.go:358] "Generic (PLEG): container finished" podID="933147f5-d93c-4873-918d-b597df00225e" containerID="92487acb908dc8a19011dbca303301a39e40b2d7ee2bd098e5695ca211c151e5" exitCode=-1 Dec 08 19:43:03 crc kubenswrapper[5120]: I1208 19:43:03.987693 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"933147f5-d93c-4873-918d-b597df00225e","Type":"ContainerDied","Data":"92487acb908dc8a19011dbca303301a39e40b2d7ee2bd098e5695ca211c151e5"} Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.714274 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_933147f5-d93c-4873-918d-b597df00225e/docker-build/0.log" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.715419 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.771028 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:43:04 crc kubenswrapper[5120]: W1208 19:43:04.772027 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ce43550_9c31_4097_8c9a_842ad87dec95.slice/crio-51b24a8ac72d449a8a72703ee826dc5de1d4fe325bddaa7736152803c3bc52a5 WatchSource:0}: Error finding container 51b24a8ac72d449a8a72703ee826dc5de1d4fe325bddaa7736152803c3bc52a5: Status 404 returned error can't find the container with id 51b24a8ac72d449a8a72703ee826dc5de1d4fe325bddaa7736152803c3bc52a5 Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830041 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830121 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830299 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: 
I1208 19:43:04.830358 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830419 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830444 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830589 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830634 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830680 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6568r\" (UniqueName: \"kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830735 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.830834 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir\") pod \"933147f5-d93c-4873-918d-b597df00225e\" (UID: \"933147f5-d93c-4873-918d-b597df00225e\") " Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.831485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.832750 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.833340 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.833743 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.834406 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.834523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.834766 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.835057 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.836331 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bbwnw"] Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.836569 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.840338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r" (OuterVolumeSpecName: "kube-api-access-6568r") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "kube-api-access-6568r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.840379 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push" (OuterVolumeSpecName: "builder-dockercfg-gf97b-push") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "builder-dockercfg-gf97b-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: W1208 19:43:04.840497 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229e33bb_6b88_4417_9b49_a59abecc8c43.slice/crio-171bf6324e91c586fb02769510ec3f082ca3653e864c57a0934af1206a25740c WatchSource:0}: Error finding container 171bf6324e91c586fb02769510ec3f082ca3653e864c57a0934af1206a25740c: Status 404 returned error can't find the container with id 171bf6324e91c586fb02769510ec3f082ca3653e864c57a0934af1206a25740c Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.840871 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull" (OuterVolumeSpecName: "builder-dockercfg-gf97b-pull") pod "933147f5-d93c-4873-918d-b597df00225e" (UID: "933147f5-d93c-4873-918d-b597df00225e"). InnerVolumeSpecName "builder-dockercfg-gf97b-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.932780 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933084 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933110 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933120 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933129 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933138 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/933147f5-d93c-4873-918d-b597df00225e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933147 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933156 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933177 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/933147f5-d93c-4873-918d-b597df00225e-builder-dockercfg-gf97b-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933186 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/933147f5-d93c-4873-918d-b597df00225e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933195 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6568r\" (UniqueName: \"kubernetes.io/projected/933147f5-d93c-4873-918d-b597df00225e-kube-api-access-6568r\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.933204 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/933147f5-d93c-4873-918d-b597df00225e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.994447 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_933147f5-d93c-4873-918d-b597df00225e/docker-build/0.log" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.994860 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"933147f5-d93c-4873-918d-b597df00225e","Type":"ContainerDied","Data":"3a3ac229d4705e622ce5d091905b8b9b37f8959bf2af01b5bd2729d1e15fa5d4"} Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.994891 5120 scope.go:117] "RemoveContainer" containerID="92487acb908dc8a19011dbca303301a39e40b2d7ee2bd098e5695ca211c151e5" Dec 08 19:43:04 crc kubenswrapper[5120]: I1208 19:43:04.994997 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.002424 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4ce43550-9c31-4097-8c9a-842ad87dec95","Type":"ContainerStarted","Data":"51b24a8ac72d449a8a72703ee826dc5de1d4fe325bddaa7736152803c3bc52a5"} Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.004493 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bbwnw" event={"ID":"229e33bb-6b88-4417-9b49-a59abecc8c43","Type":"ContainerStarted","Data":"171bf6324e91c586fb02769510ec3f082ca3653e864c57a0934af1206a25740c"} Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.029501 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.034212 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.039107 5120 scope.go:117] "RemoveContainer" containerID="87d90d3674550a3120efa493663d6d18eb3b3bd3866f361c30bb512894756969" Dec 08 19:43:05 crc kubenswrapper[5120]: I1208 19:43:05.672245 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933147f5-d93c-4873-918d-b597df00225e" path="/var/lib/kubelet/pods/933147f5-d93c-4873-918d-b597df00225e/volumes" Dec 08 19:43:06 crc kubenswrapper[5120]: I1208 19:43:06.015573 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4ce43550-9c31-4097-8c9a-842ad87dec95","Type":"ContainerStarted","Data":"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0"} Dec 08 19:43:06 crc kubenswrapper[5120]: I1208 19:43:06.017762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bbwnw" event={"ID":"229e33bb-6b88-4417-9b49-a59abecc8c43","Type":"ContainerStarted","Data":"f92071ee69124ca583270e334433267284cf65c8154ab399635bb0fedddee406"} Dec 08 19:43:06 crc kubenswrapper[5120]: I1208 19:43:06.103557 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-bbwnw" podStartSLOduration=8.103539968 podStartE2EDuration="8.103539968s" podCreationTimestamp="2025-12-08 19:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:43:06.097455167 +0000 UTC m=+838.769561836" watchObservedRunningTime="2025-12-08 19:43:06.103539968 +0000 UTC m=+838.775646627" Dec 08 19:43:06 crc kubenswrapper[5120]: I1208 19:43:06.105155 
5120 ???:1] "http: TLS handshake error from 192.168.126.11:60240: no serving certificate available for the kubelet" Dec 08 19:43:07 crc kubenswrapper[5120]: I1208 19:43:07.139270 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.034486 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="4ce43550-9c31-4097-8c9a-842ad87dec95" containerName="git-clone" containerID="cri-o://0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0" gracePeriod=30 Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.459374 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_4ce43550-9c31-4097-8c9a-842ad87dec95/git-clone/0.log" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.459502 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589387 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589499 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589575 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589646 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589778 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g64v\" (UniqueName: 
\"kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.589785 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590079 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590158 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590259 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590327 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590372 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590473 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590525 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590575 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push\") pod \"4ce43550-9c31-4097-8c9a-842ad87dec95\" (UID: \"4ce43550-9c31-4097-8c9a-842ad87dec95\") " Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590574 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590693 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.590986 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591300 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591372 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591393 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591406 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591415 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591432 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4ce43550-9c31-4097-8c9a-842ad87dec95-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591441 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591450 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.591460 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4ce43550-9c31-4097-8c9a-842ad87dec95-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.599225 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull" (OuterVolumeSpecName: "builder-dockercfg-gf97b-pull") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "builder-dockercfg-gf97b-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.599235 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v" (OuterVolumeSpecName: "kube-api-access-6g64v") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). 
InnerVolumeSpecName "kube-api-access-6g64v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.599299 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push" (OuterVolumeSpecName: "builder-dockercfg-gf97b-push") pod "4ce43550-9c31-4097-8c9a-842ad87dec95" (UID: "4ce43550-9c31-4097-8c9a-842ad87dec95"). InnerVolumeSpecName "builder-dockercfg-gf97b-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.693963 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ce43550-9c31-4097-8c9a-842ad87dec95-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.694455 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.694572 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/4ce43550-9c31-4097-8c9a-842ad87dec95-builder-dockercfg-gf97b-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:08 crc kubenswrapper[5120]: I1208 19:43:08.694589 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g64v\" (UniqueName: \"kubernetes.io/projected/4ce43550-9c31-4097-8c9a-842ad87dec95-kube-api-access-6g64v\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.043419 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_4ce43550-9c31-4097-8c9a-842ad87dec95/git-clone/0.log" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.043812 5120 generic.go:358] "Generic (PLEG): container finished" podID="4ce43550-9c31-4097-8c9a-842ad87dec95" containerID="0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0" exitCode=1 Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.043929 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.043929 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4ce43550-9c31-4097-8c9a-842ad87dec95","Type":"ContainerDied","Data":"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0"} Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.043984 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4ce43550-9c31-4097-8c9a-842ad87dec95","Type":"ContainerDied","Data":"51b24a8ac72d449a8a72703ee826dc5de1d4fe325bddaa7736152803c3bc52a5"} Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.044001 5120 scope.go:117] "RemoveContainer" containerID="0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.068472 5120 scope.go:117] "RemoveContainer" containerID="0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0" Dec 08 19:43:09 crc kubenswrapper[5120]: E1208 19:43:09.068975 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0\": container with ID starting with 0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0 not found: ID does not exist" containerID="0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.069038 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0"} err="failed to get container status \"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0\": rpc error: code = NotFound desc = could not find container \"0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0\": container with ID starting with 0e26d987402f61808d6af7333b3d07fabe69e2e5009fba6afc069ba00cd2b8d0 not found: ID does not exist" Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.081366 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.087604 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 19:43:09 crc kubenswrapper[5120]: I1208 19:43:09.672619 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce43550-9c31-4097-8c9a-842ad87dec95" path="/var/lib/kubelet/pods/4ce43550-9c31-4097-8c9a-842ad87dec95/volumes" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.588602 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589645 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="933147f5-d93c-4873-918d-b597df00225e" containerName="manage-dockerfile" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589658 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="933147f5-d93c-4873-918d-b597df00225e" containerName="manage-dockerfile" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589671 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="933147f5-d93c-4873-918d-b597df00225e" 
containerName="docker-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589677 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="933147f5-d93c-4873-918d-b597df00225e" containerName="docker-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589700 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ce43550-9c31-4097-8c9a-842ad87dec95" containerName="git-clone" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589706 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce43550-9c31-4097-8c9a-842ad87dec95" containerName="git-clone" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589807 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="933147f5-d93c-4873-918d-b597df00225e" containerName="docker-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.589817 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ce43550-9c31-4097-8c9a-842ad87dec95" containerName="git-clone" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.638526 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.638687 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.641124 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.641248 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-gf97b\"" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.642279 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.643572 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738648 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738702 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738723 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 
19:43:18.738773 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5t8\" (UniqueName: \"kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738854 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738908 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738932 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738951 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.738990 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.739008 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.739025 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.840641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.840979 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.841088 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5t8\" (UniqueName: \"kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.840786 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.841332 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.841792 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842388 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.841736 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run\") 
pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842422 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.841458 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842458 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842502 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842567 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842586 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842711 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 
19:43:18.842916 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.842941 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.843071 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.843295 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.844056 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.848370 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.848393 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.857689 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5t8\" (UniqueName: \"kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8\") pod \"service-telemetry-operator-3-build\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:18 crc kubenswrapper[5120]: I1208 19:43:18.957058 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:19 crc kubenswrapper[5120]: I1208 19:43:19.209143 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:19 crc kubenswrapper[5120]: W1208 19:43:19.215270 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f8479b3_f753_4bd7_b010_a100a2a17490.slice/crio-c35794e61a50a6e6e7f888531bc0ff9b459858cf15ff0ea5c681972092ebd367 WatchSource:0}: Error finding container c35794e61a50a6e6e7f888531bc0ff9b459858cf15ff0ea5c681972092ebd367: Status 404 returned error can't find the container with id c35794e61a50a6e6e7f888531bc0ff9b459858cf15ff0ea5c681972092ebd367 Dec 08 19:43:20 crc kubenswrapper[5120]: I1208 19:43:20.134758 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2f8479b3-f753-4bd7-b010-a100a2a17490","Type":"ContainerStarted","Data":"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0"} Dec 08 19:43:20 crc kubenswrapper[5120]: I1208 19:43:20.135218 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2f8479b3-f753-4bd7-b010-a100a2a17490","Type":"ContainerStarted","Data":"c35794e61a50a6e6e7f888531bc0ff9b459858cf15ff0ea5c681972092ebd367"} Dec 08 19:43:20 crc kubenswrapper[5120]: I1208 19:43:20.200914 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45094: no serving certificate available for the kubelet" Dec 08 19:43:21 crc kubenswrapper[5120]: I1208 19:43:21.233152 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.149611 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="2f8479b3-f753-4bd7-b010-a100a2a17490" containerName="git-clone" containerID="cri-o://295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0" gracePeriod=30 Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.566096 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2f8479b3-f753-4bd7-b010-a100a2a17490/git-clone/0.log" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.566181 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601208 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601635 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601677 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601733 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601783 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601822 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601846 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601883 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.601904 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602059 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602032 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602082 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602154 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602350 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602388 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602515 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d5t8\" (UniqueName: \"kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602667 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull\") pod \"2f8479b3-f753-4bd7-b010-a100a2a17490\" (UID: \"2f8479b3-f753-4bd7-b010-a100a2a17490\") " Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602722 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602777 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.602875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604584 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604898 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604920 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604955 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604972 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.604989 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.605060 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.609100 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull" (OuterVolumeSpecName: "builder-dockercfg-gf97b-pull") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "builder-dockercfg-gf97b-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.609432 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push" (OuterVolumeSpecName: "builder-dockercfg-gf97b-push") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "builder-dockercfg-gf97b-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.615357 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8" (OuterVolumeSpecName: "kube-api-access-5d5t8") pod "2f8479b3-f753-4bd7-b010-a100a2a17490" (UID: "2f8479b3-f753-4bd7-b010-a100a2a17490"). InnerVolumeSpecName "kube-api-access-5d5t8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706610 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8479b3-f753-4bd7-b010-a100a2a17490-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706940 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706960 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2f8479b3-f753-4bd7-b010-a100a2a17490-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706971 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5d5t8\" (UniqueName: \"kubernetes.io/projected/2f8479b3-f753-4bd7-b010-a100a2a17490-kube-api-access-5d5t8\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706983 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/2f8479b3-f753-4bd7-b010-a100a2a17490-builder-dockercfg-gf97b-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:22 crc kubenswrapper[5120]: I1208 19:43:22.706994 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2f8479b3-f753-4bd7-b010-a100a2a17490-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158451 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_2f8479b3-f753-4bd7-b010-a100a2a17490/git-clone/0.log" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158520 5120 generic.go:358] "Generic (PLEG): container finished" podID="2f8479b3-f753-4bd7-b010-a100a2a17490" containerID="295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0" exitCode=1 Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158650 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2f8479b3-f753-4bd7-b010-a100a2a17490","Type":"ContainerDied","Data":"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0"} Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158776 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"2f8479b3-f753-4bd7-b010-a100a2a17490","Type":"ContainerDied","Data":"c35794e61a50a6e6e7f888531bc0ff9b459858cf15ff0ea5c681972092ebd367"} Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.158798 5120 scope.go:117] "RemoveContainer" containerID="295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.183943 5120 scope.go:117] "RemoveContainer" containerID="295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0" Dec 08 19:43:23 crc kubenswrapper[5120]: E1208 19:43:23.184946 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0\": container with ID starting with 295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0 not found: ID does not exist" containerID="295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.184986 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0"} err="failed to get container status \"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0\": rpc error: code = NotFound desc = could not find container \"295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0\": container with ID starting with 295a841c5193cc4d9246033fae14a063f652675dbccf4f991da7dbd4800fbac0 not found: ID does not exist" Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.200823 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.205063 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 19:43:23 crc kubenswrapper[5120]: I1208 19:43:23.666189 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8479b3-f753-4bd7-b010-a100a2a17490" path="/var/lib/kubelet/pods/2f8479b3-f753-4bd7-b010-a100a2a17490/volumes" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.708019 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.709541 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f8479b3-f753-4bd7-b010-a100a2a17490" containerName="git-clone" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.709564 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8479b3-f753-4bd7-b010-a100a2a17490" containerName="git-clone" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.709743 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2f8479b3-f753-4bd7-b010-a100a2a17490" containerName="git-clone" Dec 08 19:43:32 crc 
kubenswrapper[5120]: I1208 19:43:32.723125 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.724063 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.727668 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.727678 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.727793 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.727842 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-gf97b\"" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859508 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2n9\" (UniqueName: \"kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859586 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859627 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859866 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859901 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.859940 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.860005 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.860067 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.860181 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.860355 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962115 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962156 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962417 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962440 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962621 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962761 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc 
kubenswrapper[5120]: I1208 19:43:32.962837 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.962859 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963002 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pp2n9\" (UniqueName: \"kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963018 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963082 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963267 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963364 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963373 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963409 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.963635 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.968988 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.968999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:32 crc kubenswrapper[5120]: I1208 19:43:32.978747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp2n9\" (UniqueName: \"kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9\") pod \"service-telemetry-operator-4-build\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:33 crc kubenswrapper[5120]: I1208 19:43:33.044813 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:33 crc kubenswrapper[5120]: I1208 19:43:33.658326 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:34 crc kubenswrapper[5120]: I1208 19:43:34.250161 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"967d335e-fc1e-4111-834a-7710e4450f3c","Type":"ContainerStarted","Data":"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756"} Dec 08 19:43:34 crc kubenswrapper[5120]: I1208 19:43:34.250235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"967d335e-fc1e-4111-834a-7710e4450f3c","Type":"ContainerStarted","Data":"ac66daf2afbc7b6f2c79824ab8d106c08946be0c87de5208afd3d18228d8172b"} Dec 08 19:43:34 crc kubenswrapper[5120]: I1208 19:43:34.300333 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49554: no serving certificate available for the kubelet" Dec 08 19:43:35 crc kubenswrapper[5120]: I1208 19:43:35.335617 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.265700 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="967d335e-fc1e-4111-834a-7710e4450f3c" containerName="git-clone" containerID="cri-o://e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756" gracePeriod=30 Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.673441 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_967d335e-fc1e-4111-834a-7710e4450f3c/git-clone/0.log" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.673904 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.819975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820121 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820384 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820507 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820827 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820905 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820964 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.820968 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821087 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821152 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821286 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821435 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821457 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp2n9\" (UniqueName: \"kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9\") pod \"967d335e-fc1e-4111-834a-7710e4450f3c\" (UID: \"967d335e-fc1e-4111-834a-7710e4450f3c\") " Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821887 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.821997 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822031 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822043 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822452 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822887 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822923 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822940 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822958 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822970 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.822989 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.823001 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/967d335e-fc1e-4111-834a-7710e4450f3c-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.823033 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/967d335e-fc1e-4111-834a-7710e4450f3c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.823401 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.827952 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull" (OuterVolumeSpecName: "builder-dockercfg-gf97b-pull") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "builder-dockercfg-gf97b-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.828035 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push" (OuterVolumeSpecName: "builder-dockercfg-gf97b-push") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "builder-dockercfg-gf97b-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.833473 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9" (OuterVolumeSpecName: "kube-api-access-pp2n9") pod "967d335e-fc1e-4111-834a-7710e4450f3c" (UID: "967d335e-fc1e-4111-834a-7710e4450f3c"). InnerVolumeSpecName "kube-api-access-pp2n9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.924729 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.924772 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pp2n9\" (UniqueName: \"kubernetes.io/projected/967d335e-fc1e-4111-834a-7710e4450f3c-kube-api-access-pp2n9\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.924781 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/967d335e-fc1e-4111-834a-7710e4450f3c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:36 crc kubenswrapper[5120]: I1208 19:43:36.924789 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/967d335e-fc1e-4111-834a-7710e4450f3c-builder-dockercfg-gf97b-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274028 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_967d335e-fc1e-4111-834a-7710e4450f3c/git-clone/0.log" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274095 5120 generic.go:358] "Generic (PLEG): container finished" podID="967d335e-fc1e-4111-834a-7710e4450f3c" containerID="e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756" exitCode=1 Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274357 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"967d335e-fc1e-4111-834a-7710e4450f3c","Type":"ContainerDied","Data":"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756"} Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"967d335e-fc1e-4111-834a-7710e4450f3c","Type":"ContainerDied","Data":"ac66daf2afbc7b6f2c79824ab8d106c08946be0c87de5208afd3d18228d8172b"} Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274433 5120 scope.go:117] "RemoveContainer" containerID="e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.274634 5120 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.311028 5120 scope.go:117] "RemoveContainer" containerID="e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756" Dec 08 19:43:37 crc kubenswrapper[5120]: E1208 19:43:37.311475 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756\": container with ID starting with e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756 not found: ID does not exist" containerID="e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.311532 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756"} err="failed to get container status \"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756\": rpc error: code = NotFound desc = could not find container \"e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756\": container with ID starting with e8b815942a6ec8e61f36372af1808e80aa8576d5d9d9e358eddb1dfda7a97756 not found: ID does not exist" Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.319529 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.323514 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 19:43:37 crc kubenswrapper[5120]: I1208 19:43:37.668417 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967d335e-fc1e-4111-834a-7710e4450f3c" path="/var/lib/kubelet/pods/967d335e-fc1e-4111-834a-7710e4450f3c/volumes" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.781802 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.782923 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="967d335e-fc1e-4111-834a-7710e4450f3c" containerName="git-clone" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.782936 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="967d335e-fc1e-4111-834a-7710e4450f3c" containerName="git-clone" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.783033 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="967d335e-fc1e-4111-834a-7710e4450f3c" containerName="git-clone" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.812578 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.812647 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.814955 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-gf97b\"" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.815125 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.815280 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.815479 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.875748 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.875795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.875822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.875857 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.875894 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876019 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzg69\" (UniqueName: \"kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 
19:43:46.876143 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876245 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876272 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876293 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876314 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.876330 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.977764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.977814 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.977838 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978100 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978224 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978402 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978448 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978477 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978576 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: 
\"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.978805 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979153 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979285 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979317 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979341 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzg69\" (UniqueName: \"kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979378 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979396 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979429 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.979631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.981510 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.986066 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:46 crc kubenswrapper[5120]: I1208 19:43:46.986661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:47 crc kubenswrapper[5120]: I1208 19:43:47.003869 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzg69\" (UniqueName: \"kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69\") pod \"service-telemetry-operator-5-build\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:47 crc kubenswrapper[5120]: I1208 19:43:47.145921 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:47 crc kubenswrapper[5120]: I1208 19:43:47.376548 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:48 crc kubenswrapper[5120]: I1208 19:43:48.360562 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"60534ab8-a619-4cbd-9d23-666c576e0d40","Type":"ContainerStarted","Data":"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa"} Dec 08 19:43:48 crc kubenswrapper[5120]: I1208 19:43:48.360926 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"60534ab8-a619-4cbd-9d23-666c576e0d40","Type":"ContainerStarted","Data":"24faf622c745ffdea77afd31db60faff33c33cf218aa9c93284ec21d3dde1b87"} Dec 08 19:43:48 crc kubenswrapper[5120]: I1208 19:43:48.422239 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38730: no serving certificate available for the kubelet" Dec 08 19:43:49 crc kubenswrapper[5120]: I1208 19:43:49.462852 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.372504 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="60534ab8-a619-4cbd-9d23-666c576e0d40" containerName="git-clone" containerID="cri-o://885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa" gracePeriod=30 Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.800301 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_60534ab8-a619-4cbd-9d23-666c576e0d40/git-clone/0.log" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.800747 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952209 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952606 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952625 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzg69\" (UniqueName: \"kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952659 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952676 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952738 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952728 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952813 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952866 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952973 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.952995 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953019 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root\") pod \"60534ab8-a619-4cbd-9d23-666c576e0d40\" (UID: \"60534ab8-a619-4cbd-9d23-666c576e0d40\") " Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953306 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953341 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953354 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953354 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953447 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953754 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.953966 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.955012 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.964395 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull" (OuterVolumeSpecName: "builder-dockercfg-gf97b-pull") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "builder-dockercfg-gf97b-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.964420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push" (OuterVolumeSpecName: "builder-dockercfg-gf97b-push") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "builder-dockercfg-gf97b-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5120]: I1208 19:43:50.964465 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69" (OuterVolumeSpecName: "kube-api-access-gzg69") pod "60534ab8-a619-4cbd-9d23-666c576e0d40" (UID: "60534ab8-a619-4cbd-9d23-666c576e0d40"). InnerVolumeSpecName "kube-api-access-gzg69". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054126 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054157 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60534ab8-a619-4cbd-9d23-666c576e0d40-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054179 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054188 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054197 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-pull\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-pull\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054205 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60534ab8-a619-4cbd-9d23-666c576e0d40-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054213 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gzg69\" (UniqueName: \"kubernetes.io/projected/60534ab8-a619-4cbd-9d23-666c576e0d40-kube-api-access-gzg69\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054222 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60534ab8-a619-4cbd-9d23-666c576e0d40-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.054232 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-gf97b-push\" (UniqueName: \"kubernetes.io/secret/60534ab8-a619-4cbd-9d23-666c576e0d40-builder-dockercfg-gf97b-push\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.382836 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_60534ab8-a619-4cbd-9d23-666c576e0d40/git-clone/0.log" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.383279 5120 generic.go:358] "Generic (PLEG): container finished" podID="60534ab8-a619-4cbd-9d23-666c576e0d40" containerID="885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa" exitCode=1 Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.383428 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.383382 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"60534ab8-a619-4cbd-9d23-666c576e0d40","Type":"ContainerDied","Data":"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa"} Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.383553 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"60534ab8-a619-4cbd-9d23-666c576e0d40","Type":"ContainerDied","Data":"24faf622c745ffdea77afd31db60faff33c33cf218aa9c93284ec21d3dde1b87"} Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.383604 5120 scope.go:117] "RemoveContainer" containerID="885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.426408 5120 scope.go:117] "RemoveContainer" containerID="885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa" Dec 08 19:43:51 crc kubenswrapper[5120]: E1208 19:43:51.427038 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa\": container with ID starting with 885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa not found: ID does not exist" containerID="885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.427076 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa"} err="failed to get container status \"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa\": rpc error: code = NotFound desc = could not find container \"885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa\": container with ID starting with 885961282dccb37746da1189a454b9e580b1189c483ef97b567c2b0c95d4c0fa not found: ID does not exist" Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.428530 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.433518 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 19:43:51 crc kubenswrapper[5120]: I1208 19:43:51.673748 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60534ab8-a619-4cbd-9d23-666c576e0d40" path="/var/lib/kubelet/pods/60534ab8-a619-4cbd-9d23-666c576e0d40/volumes" Dec 08 19:44:07 crc kubenswrapper[5120]: I1208 19:44:07.962253 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:44:07 crc kubenswrapper[5120]: I1208 19:44:07.962947 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6b564684c8-bz7k2_97b101ad-fe48-408d-8965-af78f6b66e12/cluster-samples-operator/0.log" Dec 08 19:44:07 crc kubenswrapper[5120]: I1208 19:44:07.972937 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t6dx4_0b722a01-9c2b-4e79-a301-c728aa5a90a1/kube-multus/0.log" Dec 08 19:44:07 crc 
kubenswrapper[5120]: I1208 19:44:07.973085 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t6dx4_0b722a01-9c2b-4e79-a301-c728aa5a90a1/kube-multus/0.log" Dec 08 19:44:07 crc kubenswrapper[5120]: I1208 19:44:07.977670 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:44:07 crc kubenswrapper[5120]: I1208 19:44:07.977693 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.000729 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6s9/must-gather-sd6tn"] Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.001989 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60534ab8-a619-4cbd-9d23-666c576e0d40" containerName="git-clone" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.002001 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="60534ab8-a619-4cbd-9d23-666c576e0d40" containerName="git-clone" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.002133 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="60534ab8-a619-4cbd-9d23-666c576e0d40" containerName="git-clone" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.033807 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6s9/must-gather-sd6tn"] Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.033925 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.036059 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-sg6s9\"/\"default-dockercfg-9bjl6\"" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.037188 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-sg6s9\"/\"openshift-service-ca.crt\"" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.037232 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-sg6s9\"/\"kube-root-ca.crt\"" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.104520 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wzt\" (UniqueName: \"kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt\") pod \"must-gather-sd6tn\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.104663 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output\") pod \"must-gather-sd6tn\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.205876 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output\") pod \"must-gather-sd6tn\" 
(UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.205929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wzt\" (UniqueName: \"kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt\") pod \"must-gather-sd6tn\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.206673 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output\") pod \"must-gather-sd6tn\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.224006 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wzt\" (UniqueName: \"kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt\") pod \"must-gather-sd6tn\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") " pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.351370 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" Dec 08 19:44:37 crc kubenswrapper[5120]: I1208 19:44:37.778107 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6s9/must-gather-sd6tn"] Dec 08 19:44:38 crc kubenswrapper[5120]: I1208 19:44:38.714414 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" event={"ID":"368c5fa6-33e4-4d9a-ba67-3e686206fdef","Type":"ContainerStarted","Data":"7a96c277fb161d64024d930acf5b714f8371b5aa9f5947de08c3bcf7e626cd2b"} Dec 08 19:44:44 crc kubenswrapper[5120]: I1208 19:44:44.754640 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" event={"ID":"368c5fa6-33e4-4d9a-ba67-3e686206fdef","Type":"ContainerStarted","Data":"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"} Dec 08 19:44:45 crc kubenswrapper[5120]: I1208 19:44:45.760989 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" event={"ID":"368c5fa6-33e4-4d9a-ba67-3e686206fdef","Type":"ContainerStarted","Data":"8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2"} Dec 08 19:44:45 crc kubenswrapper[5120]: I1208 19:44:45.781687 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" podStartSLOduration=3.154596394 podStartE2EDuration="9.781667819s" podCreationTimestamp="2025-12-08 19:44:36 +0000 UTC" firstStartedPulling="2025-12-08 19:44:37.78941101 +0000 UTC m=+930.461517659" lastFinishedPulling="2025-12-08 19:44:44.416482415 +0000 UTC m=+937.088589084" observedRunningTime="2025-12-08 19:44:45.780091589 +0000 UTC m=+938.452198228" watchObservedRunningTime="2025-12-08 19:44:45.781667819 +0000 UTC m=+938.453774478" Dec 08 19:44:46 crc kubenswrapper[5120]: I1208 19:44:46.998982 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49524: no serving certificate available for the kubelet" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.160752 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs"] Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.180448 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs"] Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.180624 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.183149 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.191631 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.235452 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrq5\" (UniqueName: \"kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.235550 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.235582 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.337124 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrq5\" (UniqueName: \"kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.337221 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.337261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc 
kubenswrapper[5120]: I1208 19:45:00.338540 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.349281 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.355754 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrq5\" (UniqueName: \"kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5\") pod \"collect-profiles-29420385-g2nzs\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.499139 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.935646 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs"] Dec 08 19:45:00 crc kubenswrapper[5120]: I1208 19:45:00.944764 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:45:01 crc kubenswrapper[5120]: I1208 19:45:01.858413 5120 generic.go:358] "Generic (PLEG): container finished" podID="159f0787-00a2-4790-836f-d221840fa1da" containerID="ddf7840bef7aec93557e2f7191c5788bb6326187b1a7c86a13f818638dfccbfe" exitCode=0 Dec 08 19:45:01 crc kubenswrapper[5120]: I1208 19:45:01.858480 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" event={"ID":"159f0787-00a2-4790-836f-d221840fa1da","Type":"ContainerDied","Data":"ddf7840bef7aec93557e2f7191c5788bb6326187b1a7c86a13f818638dfccbfe"} Dec 08 19:45:01 crc kubenswrapper[5120]: I1208 19:45:01.858857 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" event={"ID":"159f0787-00a2-4790-836f-d221840fa1da","Type":"ContainerStarted","Data":"de9bbd53a2d3d6269f7d375401e59c1385ee7ddfb7b915f56864f6b312b566b0"} Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.140780 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.277151 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume\") pod \"159f0787-00a2-4790-836f-d221840fa1da\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.277234 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume\") pod \"159f0787-00a2-4790-836f-d221840fa1da\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.277298 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrq5\" (UniqueName: \"kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5\") pod \"159f0787-00a2-4790-836f-d221840fa1da\" (UID: \"159f0787-00a2-4790-836f-d221840fa1da\") " Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.277846 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume" (OuterVolumeSpecName: "config-volume") pod "159f0787-00a2-4790-836f-d221840fa1da" (UID: "159f0787-00a2-4790-836f-d221840fa1da"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.283478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "159f0787-00a2-4790-836f-d221840fa1da" (UID: "159f0787-00a2-4790-836f-d221840fa1da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.283790 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5" (OuterVolumeSpecName: "kube-api-access-2vrq5") pod "159f0787-00a2-4790-836f-d221840fa1da" (UID: "159f0787-00a2-4790-836f-d221840fa1da"). InnerVolumeSpecName "kube-api-access-2vrq5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.379054 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/159f0787-00a2-4790-836f-d221840fa1da-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.379090 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/159f0787-00a2-4790-836f-d221840fa1da-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.379102 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vrq5\" (UniqueName: \"kubernetes.io/projected/159f0787-00a2-4790-836f-d221840fa1da-kube-api-access-2vrq5\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.872958 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" event={"ID":"159f0787-00a2-4790-836f-d221840fa1da","Type":"ContainerDied","Data":"de9bbd53a2d3d6269f7d375401e59c1385ee7ddfb7b915f56864f6b312b566b0"} Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.873000 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9bbd53a2d3d6269f7d375401e59c1385ee7ddfb7b915f56864f6b312b566b0" Dec 08 19:45:03 crc kubenswrapper[5120]: I1208 19:45:03.873076 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-g2nzs" Dec 08 19:45:22 crc kubenswrapper[5120]: I1208 19:45:22.352518 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40224: no serving certificate available for the kubelet" Dec 08 19:45:22 crc kubenswrapper[5120]: I1208 19:45:22.518077 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40228: no serving certificate available for the kubelet" Dec 08 19:45:22 crc kubenswrapper[5120]: I1208 19:45:22.526368 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40244: no serving certificate available for the kubelet" Dec 08 19:45:23 crc kubenswrapper[5120]: I1208 19:45:23.035550 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:45:23 crc kubenswrapper[5120]: I1208 19:45:23.035843 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:45:33 crc kubenswrapper[5120]: I1208 19:45:33.837783 5120 ???:1] "http: TLS handshake error from 192.168.126.11:42040: no serving certificate available for the kubelet" Dec 08 19:45:33 crc kubenswrapper[5120]: I1208 19:45:33.951034 5120 ???:1] "http: TLS handshake error from 192.168.126.11:42044: no serving certificate available for the kubelet" Dec 08 19:45:34 crc kubenswrapper[5120]: I1208 19:45:34.022854 5120 ???:1] "http: TLS handshake error from 192.168.126.11:42056: no serving certificate available for the kubelet" Dec 08 19:45:40 crc kubenswrapper[5120]: E1208 19:45:40.653523 5120 
certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.699033 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.709261 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.727679 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44582: no serving certificate available for the kubelet" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.761066 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44596: no serving certificate available for the kubelet" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.791714 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44598: no serving certificate available for the kubelet" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.831498 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44600: no serving certificate available for the kubelet" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.894979 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44602: no serving certificate available for the kubelet" Dec 08 19:45:42 crc kubenswrapper[5120]: I1208 19:45:42.996160 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44614: no serving certificate available for the kubelet" Dec 08 19:45:43 crc kubenswrapper[5120]: I1208 19:45:43.178106 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44620: no serving certificate available for the kubelet" Dec 08 19:45:43 crc kubenswrapper[5120]: I1208 19:45:43.534089 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44622: no serving certificate available for the kubelet" Dec 08 19:45:44 crc kubenswrapper[5120]: I1208 19:45:44.203319 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44628: no serving certificate available for the kubelet" Dec 08 19:45:45 crc kubenswrapper[5120]: I1208 19:45:45.504031 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44630: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.095019 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44640: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.124140 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44656: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.294557 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44672: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.323236 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44678: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.327640 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44688: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.471407 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44696: no serving certificate available for the kubelet" Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.506995 5120 ???:1] "http: TLS handshake error from 
192.168.126.11:44702: no serving certificate available for the kubelet"
Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.514158 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44708: no serving certificate available for the kubelet"
Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.645493 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44716: no serving certificate available for the kubelet"
Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.836047 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44732: no serving certificate available for the kubelet"
Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.844836 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44746: no serving certificate available for the kubelet"
Dec 08 19:45:48 crc kubenswrapper[5120]: I1208 19:45:48.904522 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44748: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.030478 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44756: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.036051 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44764: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.079290 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44774: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.211902 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44780: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.404021 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44784: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.409233 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44800: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.414446 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44804: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.561195 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44812: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.592959 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44818: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.595151 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44826: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.724083 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44834: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.925035 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44846: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.927111 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44862: no serving certificate available for the kubelet"
Dec 08 19:45:49 crc kubenswrapper[5120]: I1208 19:45:49.974698 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44868: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.105882 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44880: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.122279 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44892: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.126235 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44900: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.289573 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44916: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.439579 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44928: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.455016 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44934: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.471446 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44940: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.598008 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44956: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.628818 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44972: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.640816 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44982: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.766520 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44994: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.903571 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45010: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.923449 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45016: no serving certificate available for the kubelet"
Dec 08 19:45:50 crc kubenswrapper[5120]: I1208 19:45:50.926193 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45026: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.068867 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45032: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.088109 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45038: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.108222 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45052: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.115988 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45054: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.245981 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45062: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.403575 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45070: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.432661 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45076: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.433588 5120 ???:1] "http: TLS handshake error from 192.168.126.11:45092: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.603128 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38598: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.603514 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38592: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5120]: I1208 19:45:51.607530 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38600: no serving certificate available for the kubelet"
Dec 08 19:45:53 crc kubenswrapper[5120]: I1208 19:45:53.035403 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:45:53 crc kubenswrapper[5120]: I1208 19:45:53.035777 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:45:53 crc kubenswrapper[5120]: I1208 19:45:53.244841 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38612: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.354936 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46964: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.501089 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46980: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.513411 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47006: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.513604 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46996: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.688132 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47020: no serving certificate available for the kubelet"
Dec 08 19:46:03 crc kubenswrapper[5120]: I1208 19:46:03.726344 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47030: no serving certificate available for the kubelet"
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.034943 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.036056 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.036204 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5j87q"
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.037341 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c"} pod="openshift-machine-config-operator/machine-config-daemon-5j87q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.037452 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" containerID="cri-o://30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c" gracePeriod=600
Dec 08 19:46:23 crc kubenswrapper[5120]: E1208 19:46:23.147809 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fab2759_7b9c_43f9_a2b0_5e481a7f0cae.slice/crio-conmon-30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c.scope\": RecentStats: unable to find data in memory cache]"
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.385637 5120 generic.go:358] "Generic (PLEG): container finished" podID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerID="30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c" exitCode=0
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.385683 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerDied","Data":"30d8ca69d90945bbdca3222165929da632dddcd95ec26a74ce2884c7fac2c88c"}
Dec 08 19:46:23 crc kubenswrapper[5120]: I1208 19:46:23.385712 5120 scope.go:117] "RemoveContainer" containerID="0013a9518e030fb621458e94cf0445454fda4310af57dd86ad44617260dc5da5"
Dec 08 19:46:24 crc kubenswrapper[5120]: I1208 19:46:24.015578 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34140: no serving certificate available for the kubelet"
Dec 08 19:46:24 crc kubenswrapper[5120]: I1208 19:46:24.398725 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" event={"ID":"2fab2759-7b9c-43f9-a2b0-5e481a7f0cae","Type":"ContainerStarted","Data":"cd99c285e77ae83d7cc292ca7bf9e800f54bb6a3dfe6f1b3537e537743275c03"}
Dec 08 19:46:40 crc kubenswrapper[5120]: I1208 19:46:40.524243 5120 generic.go:358] "Generic (PLEG): container finished" podID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" containerID="18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722" exitCode=0
Dec 08 19:46:40 crc kubenswrapper[5120]: I1208 19:46:40.524294 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" event={"ID":"368c5fa6-33e4-4d9a-ba67-3e686206fdef","Type":"ContainerDied","Data":"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"}
Dec 08 19:46:40 crc kubenswrapper[5120]: I1208 19:46:40.525532 5120 scope.go:117] "RemoveContainer" containerID="18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.015519 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35638: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.201692 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35654: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.216976 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35660: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.240923 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35672: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.251694 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35676: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.267270 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35690: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.281024 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35704: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.295726 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35718: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.305510 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35722: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.447159 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35734: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.456641 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35738: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.476098 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35748: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.486034 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35760: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.498584 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35776: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.508227 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35780: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.519823 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35788: no serving certificate available for the kubelet"
Dec 08 19:46:47 crc kubenswrapper[5120]: I1208 19:46:47.535044 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35798: no serving certificate available for the kubelet"
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.576704 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6s9/must-gather-sd6tn"]
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.578750 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" containerName="copy" containerID="cri-o://8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2" gracePeriod=2
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.581063 5120 status_manager.go:895] "Failed to get status for pod" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" err="pods \"must-gather-sd6tn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-sg6s9\": no relationship found between node 'crc' and this object"
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.588455 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6s9/must-gather-sd6tn"]
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.911593 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6s9_must-gather-sd6tn_368c5fa6-33e4-4d9a-ba67-3e686206fdef/copy/0.log"
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.912434 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6s9/must-gather-sd6tn"
Dec 08 19:46:52 crc kubenswrapper[5120]: I1208 19:46:52.913886 5120 status_manager.go:895] "Failed to get status for pod" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" err="pods \"must-gather-sd6tn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-sg6s9\": no relationship found between node 'crc' and this object"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.067491 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output\") pod \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") "
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.067581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9wzt\" (UniqueName: \"kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt\") pod \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\" (UID: \"368c5fa6-33e4-4d9a-ba67-3e686206fdef\") "
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.073811 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt" (OuterVolumeSpecName: "kube-api-access-t9wzt") pod "368c5fa6-33e4-4d9a-ba67-3e686206fdef" (UID: "368c5fa6-33e4-4d9a-ba67-3e686206fdef"). InnerVolumeSpecName "kube-api-access-t9wzt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.108812 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "368c5fa6-33e4-4d9a-ba67-3e686206fdef" (UID: "368c5fa6-33e4-4d9a-ba67-3e686206fdef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.168526 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t9wzt\" (UniqueName: \"kubernetes.io/projected/368c5fa6-33e4-4d9a-ba67-3e686206fdef-kube-api-access-t9wzt\") on node \"crc\" DevicePath \"\""
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.168570 5120 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/368c5fa6-33e4-4d9a-ba67-3e686206fdef-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.633569 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6s9_must-gather-sd6tn_368c5fa6-33e4-4d9a-ba67-3e686206fdef/copy/0.log"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.634361 5120 generic.go:358] "Generic (PLEG): container finished" podID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" containerID="8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2" exitCode=143
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.634457 5120 scope.go:117] "RemoveContainer" containerID="8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.634492 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6s9/must-gather-sd6tn"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.636992 5120 status_manager.go:895] "Failed to get status for pod" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" err="pods \"must-gather-sd6tn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-sg6s9\": no relationship found between node 'crc' and this object"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.664491 5120 scope.go:117] "RemoveContainer" containerID="18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.669266 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" path="/var/lib/kubelet/pods/368c5fa6-33e4-4d9a-ba67-3e686206fdef/volumes"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.739991 5120 scope.go:117] "RemoveContainer" containerID="8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2"
Dec 08 19:46:53 crc kubenswrapper[5120]: E1208 19:46:53.740547 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2\": container with ID starting with 8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2 not found: ID does not exist" containerID="8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.740576 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2"} err="failed to get container status \"8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2\": rpc error: code = NotFound desc = could not find container \"8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2\": container with ID starting with 8cd92159eca438d29c3eef95fbc3a2fd0fa3cbdfb0fb5588fc40432bc26432a2 not found: ID does not exist"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.740599 5120 scope.go:117] "RemoveContainer" containerID="18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"
Dec 08 19:46:53 crc kubenswrapper[5120]: E1208 19:46:53.740804 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722\": container with ID starting with 18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722 not found: ID does not exist" containerID="18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"
Dec 08 19:46:53 crc kubenswrapper[5120]: I1208 19:46:53.740820 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722"} err="failed to get container status \"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722\": rpc error: code = NotFound desc = could not find container \"18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722\": container with ID starting with 18516badc7d16b3dbca60e7f57789e0edae9314e820905cb743d85c93fc4b722 not found: ID does not exist"
Dec 08 19:47:05 crc kubenswrapper[5120]: I1208 19:47:04.999672 5120 ???:1] "http: TLS handshake error from 192.168.126.11:54412: no serving certificate available for the kubelet"
Dec 08 19:47:23 crc kubenswrapper[5120]: I1208 19:47:23.673683 5120 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod368c5fa6-33e4-4d9a-ba67-3e686206fdef"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod368c5fa6-33e4-4d9a-ba67-3e686206fdef] : Timed out while waiting for systemd to remove kubepods-besteffort-pod368c5fa6_33e4_4d9a_ba67_3e686206fdef.slice"
Dec 08 19:47:23 crc kubenswrapper[5120]: E1208 19:47:23.674676 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod368c5fa6-33e4-4d9a-ba67-3e686206fdef] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod368c5fa6-33e4-4d9a-ba67-3e686206fdef] : Timed out while waiting for systemd to remove kubepods-besteffort-pod368c5fa6_33e4_4d9a_ba67_3e686206fdef.slice" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef"
Dec 08 19:47:23 crc kubenswrapper[5120]: I1208 19:47:23.852029 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6s9/must-gather-sd6tn"
Dec 08 19:47:23 crc kubenswrapper[5120]: I1208 19:47:23.854732 5120 status_manager.go:895] "Failed to get status for pod" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" err="pods \"must-gather-sd6tn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-sg6s9\": no relationship found between node 'crc' and this object"
Dec 08 19:47:23 crc kubenswrapper[5120]: I1208 19:47:23.860506 5120 status_manager.go:895] "Failed to get status for pod" podUID="368c5fa6-33e4-4d9a-ba67-3e686206fdef" pod="openshift-must-gather-sg6s9/must-gather-sd6tn" err="pods \"must-gather-sd6tn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-sg6s9\": no relationship found between node 'crc' and this object"
Dec 08 19:48:23 crc kubenswrapper[5120]: I1208 19:48:23.034995 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:48:23 crc kubenswrapper[5120]: I1208 19:48:23.035860 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:48:26 crc kubenswrapper[5120]: I1208 19:48:26.952288 5120 ???:1] "http: TLS handshake error from 192.168.126.11:42930: no serving certificate available for the kubelet"
Dec 08 19:48:53 crc kubenswrapper[5120]: I1208 19:48:53.035978 5120 patch_prober.go:28] interesting pod/machine-config-daemon-5j87q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:48:53 crc kubenswrapper[5120]: I1208 19:48:53.036710 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5j87q" podUID="2fab2759-7b9c-43f9-a2b0-5e481a7f0cae" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"